I'm trying to create a script that removes images that are not in the DB.
Here is my code (updated):
I have one problem: the LIKE syntax, '%$f%'.
#!/bin/bash
db="intranet_carc_development"
user="benjamin"
for f in public/uploads/files/*
do
    if [[ -f "$f" ]]
    then
        psql $db $user -t -v "ON_ERROR_STOP=1" \
            -c 'select * from public.articles where content like "%'"$(basename "$f")"'%"' | grep . \
            && echo "exist" \
            || echo "doesn't exist"
    fi
done
And I get the following error:
ERROR: column "%1YOLV3M4-VFb2Hydb0VFMw.png%" does not exist
LINE 1: select * from public.articles where content like "%1YOLV3M4-...
^
doesn't exist
ERROR: column "%wnj8EEd8wuJp4TdUwqrJtA.png%" does not exist
LINE 1: select * from public.articles where content like "%wnj8EEd8w...
EDIT: if I use \'%$f%\' for the LIKE:
./purge_files.sh: line 12: unexpected EOF while looking for matching `"'
./purge_files.sh: line 16: syntax error: unexpected end of file
There are several issues with your code:

$f is public/uploads/files/FILENAME and you want only the FILENAME. You can use basename to circumvent that, by writing:
f="$(basename "$f")"
psql $db $user -c "select * from public.articles where content like '%$f%'"...
(The extra quotes are there to prevent issues if you have spaces or special characters in your file name.)

Your psql request will always return true, even if no rows are found.

Your psql command will return true even if the request fails, unless you set the variable ON_ERROR_STOP to 1.

Also, the error you are seeing comes from SQL quoting rules: double quotes delimit identifiers (column names), not strings, so LIKE "%...%" is parsed as a column reference; string literals must use single quotes.

As shown in the linked questions, you can use the following syntax:
#!/bin/bash
set -o pipefail  # needed because of the pipe to grep later on
db="intranet_carc_development"
user="benjamin"
for f in public/uploads/files/*
do
    if [[ -f "$f" ]]
    then
        f="$(basename "$f")"
        psql "$db" "$user" -t -v "ON_ERROR_STOP=1" \
            -c "select * from public.articles where content like '%$f%'" | grep . \
            && echo "exist" \
            || echo "doesn't exist"
    fi
done
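One more caveat worth hedging: if a file name ever contains a single quote, interpolating it straight into the SQL string breaks the query. A minimal sketch of escaping it first — the sql_escape helper name is mine, not part of the original script:

```shell
#!/bin/bash
# Hypothetical helper: double any single quotes so the file name is safe
# inside a SQL '...' literal (standard SQL string quoting, LIKE included).
sql_escape() { printf '%s' "$1" | sed "s/'/''/g"; }

f="it's-a-file.png"
echo "select 1 from public.articles where content like '%$(sql_escape "$f")%'"
```

This only hardens the string-literal quoting; LIKE wildcards (`%`, `_`) inside file names would still need escaping separately if they can occur.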
Related
I am trying to create a bash script that will generate an SQL CREATE TABLE statement from a CSV file.
#!/bin/bash
# Check if the user provided a CSV file
if [ $# -eq 0 ]
then
    echo "No CSV file provided."
    exit 1
fi
# Check if the CSV file exists
if [ ! -f "$1" ]
then
    echo "CSV file does not exist."
    exit 1
fi
# Get the table name from the CSV file name
table_name=$(basename "$1" .csv)
# Extract the header row from the CSV file
header=$(head -n 1 "$1")
# Split the header row into column names
IFS=',' read -r -a columns <<< "$header"
# Generate the PostgreSQL `CREATE TABLE` statement
echo "CREATE TABLE $table_name ("
for column in "${columns[@]}"
do
    echo "    $column TEXT,"
done
echo ");"
If I have a CSV file with three columns (aa,bb,cc), the generated statement is missing the last column for some reason.
Any idea what could be wrong?
If I do:
for a in "${array[@]}"
do
echo "$a"
done
I am getting:
aaa
bbb
ccc
But when add something into the string:
for a in "${array[@]}"
do
echo "$a SOMETHING"
done
I get:
aaa SOMETHING
bbb SOMETHING
SOMETHING
Thanks.
Your CSV file has a `\r` (carriage return) at the end of each line.
Try the next block to reproduce the problem:
printf -v header "%s,%s,%s\r\n" "aaa" "bbb" "ccc"
IFS=',' read -r -a columns <<< "$header"
echo "Show array"
for a in "${columns[@]}"; do echo "$a"; done
echo "Now with something extra"
for a in "${columns[@]}"; do echo "$a SOMETHING"; done
You should remove the `\r`, which can be done with:
IFS=',' read -r -a columns < <(tr -d '\r' <<< "${header}")
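To see the fix end to end, here is the same reproduction with `tr -d '\r'` applied; the last column now survives intact (this sketch only echoes, it doesn't touch your real CSV):

```shell
#!/bin/bash
# Reproduce the CRLF header, then strip the carriage return before read splits it.
printf -v header "%s,%s,%s\r\n" "aaa" "bbb" "ccc"
IFS=',' read -r -a columns < <(tr -d '\r' <<< "$header")
echo "${columns[2]} SOMETHING"   # prints "ccc SOMETHING"
```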
I have a bash script that updates a table based on a file. As written, it opens and closes a database connection for every line in the file; I'd like to understand how to open once, perform all the updates, and then close. That's fine for a few updates, but if it ever needs more than a few hundred it could be really taxing on the system.
#!/bin/bash
file=/export/home/dncs/tmp/file.csv
dateFormat=$(date +"%m-%d-%y-%T")
LOGFILE=/export/home/dncs/tmp/Log_${dateFormat}.log
echo "${dateFormat} : Starting work" >> $LOGFILE 2>&1
while IFS="," read mac loc; do
    if [[ "$mac" =~ ^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$ ]]; then
        dbaccess thedb <<EndOfUpdate >> $LOGFILE 2>&1
UPDATE profile
SET local_code= '$loc'
WHERE mac_address = '$mac';
EndOfUpdate
    else
        echo "Error: $mac not valid format" >> $LOGFILE 2>&1
    fi
    IIH -i $mac >> $LOGFILE 2>&1
done <"$file"
Source File.
12:BF:20:04:BB:30,POR-4
12:BF:21:1C:02:B1,POR-10
12:BF:20:04:72:FD,POR-4
12:BF:20:01:5B:4F,POR-10
12:BF:20:C2:71:42,POR-7
This is more or less what I'd do:
#!/bin/bash
fmt_date() { date +"%Y-%m-%d.%T"; }
file=/export/home/dncs/tmp/file.csv
dateFormat=$(fmt_date)
LOGFILE="/export/home/dncs/tmp/Log_${dateFormat}.log"
exec >> $LOGFILE 2>&1
echo "${dateFormat} : Starting work"
valid_mac='/^\(\([0-9a-fA-F]\{2\}:\)\{5\}[0-9a-fA-F]\{2\}\),\([^,]*\)$/'
update_stmt="UPDATE profile SET local_code = '\3' WHERE mac_address = '\1';"
sed -n -e "$valid_mac s//$update_stmt/p" "$file" |
dbaccess thedb -
sed -n -e "$valid_mac d; s/.*/Error: invalid format: &/p" "$file"
sed -n -e "$valid_mac s//IIH -i \1/p" "$file" | sh
echo "$(fmt_date) : Finished work"
I changed the date format to a variant of ISO 8601; it is easier to parse. You can stick with your Y2K-non-compliant US-ish format if you prefer. The exec line arranges for standard output and standard error from here onwards to go to the log file.

The sed commands all use the same structure, and all use the same pattern match stored in a variable; this makes consistency easier. The first sed script converts the data into UPDATE statements (which are fed to dbaccess). The second script identifies invalid MAC addresses; it deletes valid ones and maps the invalid lines into error messages. The third script ignores invalid MAC addresses but generates an IIH command for each valid one.

The script records an end time, which will allow you to assess how long the processing takes. Again, repetition is avoided by creating and using the fmt_date function.
Be cautious about testing this. I had a file data containing:
87:36:E6:5E:AC:41,loc-OYNK
B2:4D:65:70:32:26,loc-DQLO
ZD:D9:BA:34:FD:97,loc-PLBI
04:EB:71:0D:29:D0,loc-LMEE
DA:67:53:4B:EC:C4,loc-SFUU
I replaced the dbaccess with cat, and the sh with cat, and I relocated the log file to the current directory, leading to:
#!/bin/bash
fmt_date() { date +"%Y-%m-%d.%T"; }
#file=/export/home/dncs/tmp/file.csv
file=data
dateFormat=$(fmt_date)
#LOGFILE="/export/home/dncs/tmp/Log_${dateFormat}.log"
LOGFILE="Log-${dateFormat}.log"
exec >> $LOGFILE 2>&1
echo "${dateFormat} : Starting work"
valid_mac='/^\(\([0-9a-fA-F]\{2\}:\)\{5\}[0-9a-fA-F]\{2\}\),\([^,]*\)$/'
update_stmt="UPDATE profile SET local_code = '\3' WHERE mac_address = '\1';"
sed -n -e "$valid_mac s//$update_stmt/p" "$file" |
cat
#dbaccess thedb -
sed -n -e "$valid_mac d; s/.*/Error: invalid format: &/p" "$file"
#sed -n -e "$valid_mac s//IIH -i \1/p" "$file" | sh
sed -n -e "$valid_mac s//IIH -i \1/p" "$file" | cat
echo "$(fmt_date) : Finished work"
After I ran it, the log file contained:
2017-04-27.14:58:20 : Starting work
UPDATE profile SET local_code = 'loc-OYNK' WHERE mac_address = '87:36:E6:5E:AC:41';
UPDATE profile SET local_code = 'loc-DQLO' WHERE mac_address = 'B2:4D:65:70:32:26';
UPDATE profile SET local_code = 'loc-LMEE' WHERE mac_address = '04:EB:71:0D:29:D0';
UPDATE profile SET local_code = 'loc-SFUU' WHERE mac_address = 'DA:67:53:4B:EC:C4';
Error: invalid format: ZD:D9:BA:34:FD:97,loc-PLBI
IIH -i 87:36:E6:5E:AC:41
IIH -i B2:4D:65:70:32:26
IIH -i 04:EB:71:0D:29:D0
IIH -i DA:67:53:4B:EC:C4
2017-04-27.14:58:20 : Finished work
The UPDATE statements would have gone to DB-Access. The bogus MAC address was identified. The correct IIH commands would have been run.
Note that piping the output into sh requires confidence that the data you generate (the IIH commands) will be clean.
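If you'd rather stay closer to the original loop than maintain three sed scripts, the same one-connection effect can be had by generating the whole batch of UPDATE statements first and feeding it to dbaccess once. A sketch with awk — the dbaccess call is commented out so the sketch only prints the batch, and the sample file stands in for the real CSV:

```shell
#!/bin/bash
# Sample input in the question's format (MAC,location).
file=$(mktemp)
printf '%s\n' '12:BF:20:04:BB:30,POR-4' '12:BF:21:1C:02:B1,POR-10' > "$file"

# Build one SQL batch: one UPDATE per valid MAC line. Invalid lines are
# simply skipped here; feed the whole batch to dbaccess in one session.
awk -F, -v q="'" '
    $1 ~ /^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$/ {
        printf "UPDATE profile SET local_code = %s%s%s WHERE mac_address = %s%s%s;\n",
               q, $2, q, q, $1, q
    }' "$file"       # | dbaccess thedb -

rm -f "$file"
```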
In pure /bin/sh, how can I distinguish between an empty variable, an unset variable, and a non-existing (not defined) variable?
Here are the cases:
# Case 1: not existing
echo "${foo}"
# Case 2: unset
foo=
echo "${foo}"
# Case 3: Empty
foo=""
echo "${foo}"
Now I would like to check for each of those three cases.
If case 2 and case 3 are actually the same, then I must at least be able to distinguish between them and case 1.
Any idea?
UPDATE
Solved thanks to Matteo
Here is what the code looks like:
#foo <-- not defined
bar1=
bar2=""
bar3="a"
if ! set | grep '^foo=' >/dev/null 2>&1; then
    echo "foo does not exist"
elif [ -z "${foo}" ]; then
    echo "foo is empty"
else
    echo "foo has a value"
fi

if ! set | grep '^bar1=' >/dev/null 2>&1; then
    echo "bar1 does not exist"
elif [ -z "${bar1}" ]; then
    echo "bar1 is empty"
else
    echo "bar1 has a value"
fi

if ! set | grep '^bar2=' >/dev/null 2>&1; then
    echo "bar2 does not exist"
elif [ -z "${bar2}" ]; then
    echo "bar2 is empty"
else
    echo "bar2 has a value"
fi

if ! set | grep '^bar3=' >/dev/null 2>&1; then
    echo "bar3 does not exist"
elif [ -z "${bar3}" ]; then
    echo "bar3 is empty"
else
    echo "bar3 has a value"
fi
And the results:
foo does not exist
bar1 is empty
bar2 is empty
bar3 has a value
I don't know about sh, but in bash and dash you can do echo ${TEST:?Error} for case 1 vs. cases 2/3. And from a quick glance at Wikibooks, it seems like it should work for the Bourne shell too.
You can use it like this in bash and dash (use $? to get the error code)
echo ${TEST:?"Error"}
bash: TEST: Error
[lf#dell:~/tmp/soTest] echo $?
1
[lf#dell:~/tmp/soTest] TEST2="ok"
[lf#dell:~/tmp/soTest] echo ${TEST2:?"Error"}
ok
[lf#dell:~/tmp/soTest] echo $?
0
[lf#dell:~/tmp/soTest] dash
$ echo ${TEST3:?"Error"}
dash: 1: TEST3: Error
$ TEST3=ok
$ echo ${TEST3:?"Error"}
ok
You can use ${var?} syntax to throw an error if var is unset and ${var:?} to throw an error if var is unset or empty. For a concrete example:
$ unset foo
$ test -z "${foo?unset}" && echo foo is empty || echo foo is set to $foo
-bash: foo: unset
$ foo=
$ test -z "${foo?unset}" && echo foo is empty || echo foo is set to $foo
foo is empty
$ foo=bar
$ test -z "${foo?unset}" && echo foo is empty || echo foo is set to $foo
foo is set to bar
I don't see a correct answer here yet that is POSIX compliant. First let me reiterate William Pursell's assertion that the code foo= is indeed setting the variable foo to an empty value, the same as foo="". For foo to be unset, it must either never be set, or be unset with unset foo.
Matteo's answer is correct, but there are caveats: when you run set in bash with posix mode disabled, it also prints all of the defined functions. This can be suppressed like this:
isvar() (
    [ -n "$BASH" ] && set -o posix
    set | grep -q "^$1="
)
By writing it as a sub-shell function, we don't need to worry about the state of bash's posix setting after we're done.
However, you can still get false-positives from variables whose values contain carriage returns, so understand that this is not 100% foolproof.
$ a="
> b=2
> "
$ set | grep ^b=
b=2
So for maximum correctness you can exploit bash's -v test, when available.
isvar() {
    if [ -n "$BASH" ]; then
        [ -v "$1" ]
    else
        set | grep -q "^$1="
    fi
}
Perhaps somebody has a library somewhere that supports other shells' extensions as well. Essentially, this is a weakness in the POSIX specification and it just hasn't been seen as warranting amendment.
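For what it's worth, POSIX parameter expansion can also make the unset-vs-empty distinction directly, without scanning the output of set: ${name+x} expands to x only when name is set, even if it is set to the empty string. A sketch — the is_set helper name is mine, and note the eval means the argument must be a trusted variable name:

```shell
#!/bin/sh
# ${name+x} expands to "x" if name is set (possibly empty), to "" if unset.
# eval is needed to expand a variable whose *name* is itself in a variable.
is_set() { eval "[ \"\${$1+x}\" = x ]"; }

unset foo
bar=
is_set foo && echo "foo is set" || echo "foo is unset"   # prints "foo is unset"
is_set bar && echo "bar is set" || echo "bar is unset"   # prints "bar is set"
```

Combined with a [ -z "${name}" ] check, this separates all three cases (unset, empty, non-empty) in plain sh.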
You can use set
If no options or arguments are specified, set shall write the names and values of all shell variables in the collation sequence of the current locale. Each name shall start on a separate line, using the format:
You can list all the variables (set) and grep for the variable name you want to check
set | grep '^foo='
How do you hide the column names and row count in the output from psql?
I'm running a SQL query via psql with:
psql --user=myuser -d mydb --output=result.txt -c "SELECT * FROM mytable;"
and I'm expecting output like:
1,abc
2,def
3,xyz
but instead I get:
id,text
-------
1,abc
2,def
3,xyz
(3 rows)
Of course, it's not impossible to filter the top two rows and the bottom row out after the fact, but is there a way to do it with only psql? Reading over its manpage, I see options for controlling the field delimiter, but nothing for hiding extraneous output.
You can use the -t or --tuples-only option:
psql --user=myuser -d mydb --output=result.txt -t -c "SELECT * FROM mytable;"
Edited (more than a year later) to add:
You also might want to check out the COPY command. I no longer have any PostgreSQL instances handy to test with, but I think you can write something along these lines:
psql --user=myuser -d mydb -c "COPY mytable TO 'result.txt' DELIMITER ','"
(except that result.txt will need to be an absolute path). The COPY command also supports a more-intelligent CSV format; see its documentation.
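If the server-side COPY is awkward (it runs as the server user and needs an absolute path), psql's client-side \copy variant writes the file wherever psql itself runs, and relative paths work. A sketch that only builds and prints the command; the actual psql call is commented out, and mytable/myuser/mydb are the question's names:

```shell
#!/bin/bash
# \copy is parsed by psql itself, so the output path is resolved client-side.
outfile="result.txt"
copy_cmd="\\copy mytable TO '$outfile' WITH (FORMAT csv)"
echo "$copy_cmd"   # prints: \copy mytable TO 'result.txt' WITH (FORMAT csv)
# psql --user=myuser -d mydb -c "$copy_cmd"
```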
You can also redirect output from within psql and use the same option. Use \o to set the output file, and \t to output tuples only (or \pset to turn off just the rowcount "footer").
\o /home/flynn/queryout.txt
\t on
SELECT * FROM a_table;
\t off
\o
Alternatively,
\o /home/flynn/queryout.txt
\pset footer off
. . .
Usually, when you want to parse psql-generated output, you will want to set the -A (unaligned output) and -F (field separator) options as well:
# generate t.col1, t.col2, t.col3 ...
while read -r c; do test -z "$c" || echo , $table_name.$c | \
perl -ne 's/\n//gm;print' ; \
done < <(cat << EOF | PGPASSWORD=${postgres_db_useradmin_pw:-} \
psql -A -F -v -q -t -X -w -U \
${postgres_db_useradmin:-} --port $postgres_db_port --host $postgres_db_host -d \
$postgres_db_name -v table_name=${table_name:-}
SELECT column_name
FROM information_schema.columns
WHERE 1=1
AND table_schema = 'public'
AND table_name =:'table_name' ;
EOF
)
echo -e "\n\n"
You can find an example of the full bash call here:
Within a bash script I am running a SQL query via psql -c. Based on the arguments given to the bash script, the WHERE clause of the SELECT command will be different. So basically I need to know if it's possible to do something like this:
psql -c "select statement here until we get to the where clause at which point we break out of statement and do"
if (arg1 was given)
    concatenate "where arg1" to the end of the above select statement
if (arg2 was given)
    concatenate "where arg2" to the end of the above select statement
and so on for as many arguments. I know I could do this much more easily in a SQL function if I just passed the arguments, but that really isn't an option. Thanks!
Edit: 5 seconds after posting this I realize I could just create a string before calling the psql command and then call the psql command on that. Doh!
psql -c "SELECT columns FROM table ${1:+WHERE $1} ${2:+AND $2}"
This uses the "use alternate value" substitution - ${VAR:+alternate} - where alternate is substituted if $VAR is set and not empty. If $VAR is unset or empty, nothing is substituted. (Note the second clause is joined with AND, since a statement can have only one WHERE.)
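A runnable illustration of that expansion, simulating the script's positional arguments with set -- (the table and column names are placeholders, and the psql call is commented out):

```shell
#!/bin/bash
# Simulate calling the script with one argument; $2 stays unset,
# so its ${2:+...} clause expands to nothing.
set -- "id in (1,2,3)"
query="SELECT columns FROM mytable ${1:+WHERE $1} ${2:+AND $2}"
echo "$query"   # the WHERE clause is filled in, the AND clause vanishes
# psql -c "$query"
```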
Save a script, e.g. query.sh:
#!/bin/bash
query="select statement here until we get to the where clause at which point we break out of statement and do"
if [ $# -gt 0 ]
then
    query+=" where $1"
    shift
fi
while [ $# -gt 0 ]
do
    query+=" and $1"
    shift
done
psql -c "$query"
Call it like
chmod +x ./query.sh
./query.sh "id in (1,2,3)" "modified_by='myname'"
SQL="select x,y,z FROM foobar"
if [ "$1" != "" ]
then
    SQL="$SQL where $1"
fi
psql -c "$SQL"
stmt="select statement here until we get to the where clause at which point we break out of statement and do"
if (( $# > 0 ))
then
    stmt="$stmt where $1"
fi
if (( $# > 1 ))
then
    stmt="$stmt and $2"
fi
psql -c "$stmt"