Shell script: binary operator expected - while loop

I have a bug in my shell script. I know it is in the while loop, but I can't see it; I have already checked the spaces, and I also tried using test instead of brackets, but I get the same error.
If I comment out the while loop, my code works. The error is:
line 45: [: -1e: binary operator expected
My code is:
#!/bin/sh
# pe request
#$ -pe mpi_16 32
#### 16 core : 'mpi_16 16' || 24 core : 'mpi_24 24 '
# our Job name
#$ -N test3MD
#$ -S /bin/sh
#$ -q dulce.q
#### 16 core : '2687wv2.q' || 24 core : '2697v2.q'
#$ -V
#$ -cwd
# needs in
# $NSLOTS
# the number of tasks to be used
# $TMPDIR/machines
# a valid machine file to be passed to mpirun
# enables $TMPDIR/rsh to catch rsh calls if available
echo "Got $NSLOTS slots."
cat $TMPDIR/machines
################ mpi execute #############################
MPI_HOME=/opt/intel/impi/4.0.0.028
MPI_EXEC=$MPI_HOME/bin64/mpirun
cd $SGE_O_WORKDIR
rm ./POTCAR
cat /share/VASP_POTCAR/PAW_PBE_VASP52/C/POTCAR >./POTCAR
cat /share/VASP_POTCAR/PAW_PBE_VASP52/Li/POTCAR >>./POTCAR
runVASP=/opt/vasp/vasp.5.4/vasp.5.4.1/bin/vasp_std
runVASP_NonCol=/opt/vasp/vasp.5.4/vasp.5.4.1/bin/vasp_ncl
runVASP_GAMMA=/opt/vasp/vasp.5.4/vasp.5.4.1/bin/vasp_gam
i=1
while [ $i -1e 10 ]
do
cp POSCAR POSCAR.$i
$MPI_EXEC -machinefile $TMPDIR/machines -n $NSLOTS $runVASP > stdout
cp CONTCAR POSCAR
cp REPORT REPORT.$i
cp HILLSPOT PENALTY
let i=i+1
done
Thanks
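The error message itself points at the likely culprit: in [ $i -1e 10 ] the operator is written as -1e (digit one) rather than -le (letter l), so the test command treats -1e as an ordinary string and complains about a missing binary operator. A minimal sketch of the corrected loop, assuming the rest of the body stays as it is (note that let is a bash-ism; since the script runs under /bin/sh, POSIX arithmetic is the safer choice):
i=1
while [ "$i" -le 10 ]   # -le (letter l), not -1e (digit one)
do
cp POSCAR POSCAR.$i
# ... rest of the loop body unchanged ...
i=$((i + 1))            # POSIX replacement for 'let i=i+1'
done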

Related

GNU Parallel -q option causing BCP "unknown option" errors (different string quotes on local vs remote hosts)

Seeing very strange behavior when using GNU parallel to distribute export jobs using bcp from mssql-tools. It appears that when using the -q option for parallel, strings are interpreted differently on the local host than on remote hosts.
Running only as a loop through the files on the local host, the bcp processes throw no errors.
However, distributing the file exports with parallel, the bcp processes executing on the local host throw
/opt/mssql-tools/bin/bcp: unknown option
errors, while those executing on remote hosts (via a --sshloginfile param) finish successfully. The basic code being run looks like...
# setting some vars to pass
TO_SERVER_ODBCDSN="-D -S MyMSSQLServer"
TO_SERVER_IP="-S 172.18.54.22"
DB="$dest_db" #TODO: enforce being more careful with this value
TABLE="$tablename" # MUST exist beforehand, case matters
USER=$(tail -n+1 $source_home/mssql-creds.txt | head -1)
PASSWORD=$(tail -n+2 $source_home/mssql-creds.txt | head -1)
DATAFILES="/some/path/to/files/"
TARGET_GLOB="*.tsv"
RECOMMEDED_IMPORT_MODE='-c' # makes a HUGE difference, see https://stackoverflow.com/a/16310219/8236733
DELIMITER="\\\t" # (currently not used) DO NOT use format like "'\t'", nested quotes seem to cause hard-to-catch error, want "\t" literal
....
bcpexport() {
filename=$1
TO_SERVER_ODBCDSN=$2
DB=$3
TABLE=$4 # MUST exist beforehand, case matters
USER=$5
PASSWORD=$6
RECOMMEDED_IMPORT_MODE=$7 # makes a HUGE difference, see https://stackoverflow.com/a/16310219/8236733
DELIMITER=$8 # not currently used
WORKDIR=$9
LOGDIR=${10}
....
/opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" \
$TO_SERVER_ODBCDSN \
-U $USER -P $PASSWORD \
-d $DB \
$RECOMMEDED_IMPORT_MODE
-t "\t" \
-e ${localfile}.bcperror.log
}
export -f bcpexport
parallelization_pernode=5
parallel -q -j $parallelization_pernode \
--sshloginfile $source_home/parallel-nodes.txt \
--env bcpexport \
bcpexport {} "$TO_SERVER_ODBCDSN" $DB $TABLE $USER $PASSWORD $RECOMMEDED_IMPORT_MODE $DELIMITER $workingdir $logdir \
::: $DATAFILES/$TARGET_GLOB #from hdfs nfs gateway
Looking at the bash interpretation of the processes (by running ps -aux | grep bcp on the hosts that parallel is given in the --sshloginfile), for the remote hosts we see...
/bin/bash -c bcpexport() { ... /opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" $TO_SERVER_ODBCDSN -U $USER -P $PASSWORD -d $DB $RECOMMEDED_IMPORT_MODE; -t "\t" -e ${localfile}.bcperror.log; ...
for the local host, the bash interpretation is...
/bin/bash -c bcpexport() { ... /opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" $TO_SERVER_ODBCDSN -U $USER -P $PASSWORD -d $DB $RECOMMEDED_IMPORT_MODE; -t "\t" -e ${localfile}.bcperror.log; ...
that is, they look the same.
My current thought is that the "\t" in the bcp command is being interpreted in a problematic way. Debugging parallel without vs with the -q option we see...
$ parallel -j 5 --sshloginfile ./parallel-nodes.txt echo "Number {}: Running on \`hostname\`: \t" ::: 1 2 3 4 5
Number 4: Running on HW04.ucera.local: t
Number 1: Running on HW04.ucera.local: t
Number 2: Running on HW03.ucera.local: t
Number 5: Running on HW03.ucera.local: t
Number 3: Running on HW02.ucera.local: t
$ parallel -q -j 5 --sshloginfile ./parallel-nodes.txt echo "Number {}: Running on \`hostname\`: \t" ::: 1 2 3 4 5
Number 1: Running on `hostname`:
Number 4: Running on `hostname`:
Number 3: Running on `hostname`: \t
Number 2: Running on `hostname`: \t
Number 5: Running on `hostname`: \t
The bcp command needs the "\t" literal, not the "t" literal (and I suspect several other similar string corruptions; also, I believe \t is the default for bcp anyway, but this is just an example and I want to keep \t for code clarity), but I'm not sure how to get this for both local and remote nodes, or even why this behavior differs between remote and local.
Basically, I need the strings to be exactly the same for both local and remote hosts, even if the strings contain spaces or escape characters (note: I think this used to not be the case; I have older scripts on other machines that don't have this problem).
Not sure if this counts more as a parallel problem or a bcp problem (currently thinking something is going wrong with the -q option in parallel, but not sure). Anyone have any debugging suggestions or fixes? Ideas of what could be happening?
Firstly, the reason why hostname is not expanded is due to -q: it quotes the ` so that it is not expanded.
Secondly, I think what you see is the difference in behaviour of the built-in echo, which depends on the shell. Here I compare echo \\\\t in different shells:
$ parallel --onall --tag -S sh#lo,bash#lo,csh#lo,tcsh#lo,ksh#lo,zsh#lo echo \\\\t ::: a
bash#lo \t a
tcsh#lo a
sh#lo a
ksh#lo \t a
zsh#lo a
csh#lo \t a
That does not, however, get you closer to a solution. If I were you, I would use env_parallel to copy the environment variables. And if the login shell on the remote systems is not the same as your shell, then set PARALLEL_SHELL to force using that shell.
So:
#!/bin/bash
env_parallel --session
# setting some vars to pass
TO_SERVER_ODBCDSN="-D -S MyMSSQLServer"
:
:
PARALLEL_SHELL=bash env_parallel -q -j $parallelization_pernode ...
(no need to use either --env or 'export -f' when using 'env_parallel --session')
# Cleanup (not needed if this is the last line in the script)
env_parallel --end-session

grep pattern match with line number

I want to get the line number of the matched pattern, but I have a condition that the pattern match should contain digits.
If I use
grep -ri -n "package $i " . | grep -P '\d'
then I get the line numbers of the lines matching the pattern, but I also get the lines with 'package ' that do not contain any digits:
The output below shows me line number 71 for 'package ca-certificates', but there are four more lines for glusterfs that I don't need, as they don't have any digit in them.
for i in $(awk '{print $1}' ~/Version-pkgs)
do
grep -ri -n "package $i " . | grep -P '\d'
done
sh search-version-pkgs.sh
./core.pkglist:71:package ca-certificates 2017.2.14 65.0.1.el6_9 arch noarch
./dev.pkglist:1343:package glusterfs-devel \
./dev.pkglist:1346:package glusterfs-api-devel \
./dev.pkglist:1346:package glusterfs-api-devel \
./dev.pkglist:1346:package glusterfs-api-devel \
./dev.pkglist:1343:package glusterfs-devel \
./core.pkglist:234:package initscripts 9.03.58 1.0.3.el6_9.2prerel7.6.0.0.0_88.51.0 arch ${bestArch}
./core.pkglist:397:package nspr 4.13.1 1.el6
./dev.pkglist:859:package nspr-devel \
./dev.pkglist:859:package nspr-devel \
./core.pkglist:401:package nss 3.28.4 4.0.1.el6_9 arch ${bestArch}
Running the script below gives me the exact pattern matches, i.e. only the 'package ' lines that contain digits, but I do not get their line numbers:
for i in $(awk '{print $1}' ~/Version-pkgs)
do
egrep -ri "package $i " . | grep -P '\d'
done
sh search-version-pkgs.sh
./core.pkglist:package ca-certificates 2017.2.14 65.0.1.el6_9 arch noarch
./core.pkglist:package initscripts 9.03.58 1.0.3.el6_9.2prerel7.6.0.0.0_88.51.0 arch ${bestArch}
./core.pkglist:package nspr 4.13.1 1.el6
./core.pkglist:package nss 3.28.4 4.0.1.el6_9 arch ${bestArch}
./core.pkglist:package nss-util 3.28.4 1.el6_9 arch ${bestArch}
./core.pkglist:package tzdata 2018e 3.el6 arch noarch
How can I get the output with the line number along with the pattern match, in the form file:lineno:package pkgname digits?
for i in $(cut -f1 ~/Version-pkgs)
do
grep -rin "package $i.*[0-9]" .
done
No need to use grep twice.
One-liner:
grep -rinf <(sed -E 's,([^ ]*).*,package \1.*[0-9],' ~/Version-pkgs) .
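To see what that process substitution feeds to grep -f, you can run the sed on its own. A minimal sketch, using package names that appear in the output above as illustrative input (the exact contents of ~/Version-pkgs are an assumption here):
$ cat ~/Version-pkgs
ca-certificates 2017.2.14
nspr 4.13.1
$ sed -E 's,([^ ]*).*,package \1.*[0-9],' ~/Version-pkgs
package ca-certificates.*[0-9]
package nspr.*[0-9]
Each line becomes one regular expression, so grep -rinf matches 'package <name>' only when something containing a digit follows, and prints the file name and line number in a single pass.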

Combine two commands using GNU parallel for OCR project

I would like to write a script which runs a command to OCR PDFs and deletes the resulting images after the text files have been written.
The two commands I want to combine are the following.
This command creates folders, extracts PGM images from each PDF and adds them to each folder:
time find . -name \*.pdf | parallel -j 4 --progress 'mkdir -p {.} && gs -dQUIET -dINTERPOLATE -dSAFER -dBATCH -dNOPAUSE -dPDFSETTINGS=/screen -dNumRenderingThreads=4 -sDEVICE=pgmraw -r300 -dTextAlphaBits=4 -sProcessColorModel=DeviceGray -sColorConversionStrategy=Gray -dOverrideICC -o {.}/{.}-%03d.pgm {}'
This command does the OCR and deletes the resulting images (PGM):
time find . -name \*.pgm | parallel -j 4 --progress 'tesseract {} {.} -l deu_frak && rm {.}.pgm'
I would like to combine both commands so that the script deletes the PGM images after each OCR run. If I run the above commands as they are, the first will extract all the images and eat up my disk space, and only then will the second command do the OCR and delete the images as a last step.
So,
Create folder
Extract PGM from PDF
OCR from PGM to txt
Delete the PGM images which have just been used (this is the missing step)
Basically, I would like these 4 steps to be done in this order for each PDF separately, and not for all PDFs at once. How can I do this?
Edit:
My first attempt to solve my issues was to create the following command:
time find . -name \*.pdf | parallel -j 4 -m --progress --eta 'mkdir -p {.} && gs -dQUIET -dINTERPOLATE -dSAFER -dBATCH -dNOPAUSE -dPDFSETTINGS=/screen -dNumRenderingThreads=4 -sDEVICE=pgmraw -r300 -dTextAlphaBits=4 -sProcessColorModel=DeviceGray -sColorConversionStrategy=Gray -dOverrideICC -o {.}/{.}-%03d.pgm {}' && time find . -name \*.pgm | parallel -j 4 --progress --eta 'tesseract {} {.} -l deu_frak && rm {.}.pgm'
However, tesseract would not find the language package.
Updated Answer
I have not tested this, so please run it on a copy of a small subset of your files. You can turn off the messages starting with DEBUG: once you are happy it looks good:
#!/bin/bash
# Declare a function for "parallel" to call
doit() {
# Get name of PDF with and without extension
withext="$1"
noext="$2"
echo "DEBUG: Processing $withext into $noext"
# Make output directory
mkdir -p "$noext"
# Extract as PGM into subdirectory
gs ... -o "$noext"/"${noext}-%03d.pgm" "$withext"
# Go to target directory or die with error message
cd "$noext" || { echo ERROR: Failed to cd to $noext ; exit 1; }
# OCR and remove each PGM
n=0
for f in *pgm; do
echo "DEBUG: OCR $f into $n"
tesseract "$f" "$n" -l deu_frak
echo "DEBUG: Remove $f"
rm "$f"
((n=n+1))
done
}
# Ensure the function is exported to subshells
export -f doit
find . -name \*.pdf -print0 | parallel -0 doit {} {.}
You should be able to test the doit() function without parallel by running:
doit someFile.pdf someFile
Original Answer
If you want to do lots of things for each argument in GNU Parallel, the simplest way is to declare a bash function and then call that.
It looks like this:
# Declare a function for "parallel" to call
doit() {
echo "$1" "$2"
# mkdir something
# extract PGM
# do OCR
# delete PGM
}
# Ensure the function is exported to subshells
export -f doit
find some files -print0 | parallel -0 doit {} {.}

How to capture CMake command line arguments?

I want to record the arguments passed to cmake in my generated scripts. E.g., "my-config.in" will be processed by cmake, and it has a definition like this:
config="#CMAKE_ARGS#"
After cmake, my-config will contain a line something like this:
config="-DLINUX -DUSE_FOO=y -DCMAKE_INSTALL_PREFIX=/usr"
I tried CMAKE_ARGS, CMAKE_OPTIONS, but failed. No documents mention this. :-(
I don't know of any variable which provides this information, but you can generate it yourself (with a few provisos).
Any -D arguments passed to CMake are added to the cache file CMakeCache.txt in the build directory and are reapplied during subsequent invocations without having to be specified on the command line again.
So in your example, if you first execute CMake as
cmake ../.. -DCMAKE_INSTALL_PREFIX:PATH=/usr
then you will find that subsequently running simply
cmake .
will still have CMAKE_INSTALL_PREFIX set to /usr
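If you want to verify what is currently cached without re-running a full configure, one quick check (a sketch, run from the build directory) is cmake's view mode together with -L:
cmake -N -L . | grep CMAKE_INSTALL_PREFIX
# expected to print something like: CMAKE_INSTALL_PREFIX:PATH=/usr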
If what you're looking for from CMAKE_ARGS is the full list of variables defined on the command line from every invocation of CMake then the following should do the trick:
get_cmake_property(CACHE_VARS CACHE_VARIABLES)
foreach(CACHE_VAR ${CACHE_VARS})
get_property(CACHE_VAR_HELPSTRING CACHE ${CACHE_VAR} PROPERTY HELPSTRING)
if(CACHE_VAR_HELPSTRING STREQUAL "No help, variable specified on the command line.")
get_property(CACHE_VAR_TYPE CACHE ${CACHE_VAR} PROPERTY TYPE)
if(CACHE_VAR_TYPE STREQUAL "UNINITIALIZED")
set(CACHE_VAR_TYPE)
else()
set(CACHE_VAR_TYPE :${CACHE_VAR_TYPE})
endif()
set(CMAKE_ARGS "${CMAKE_ARGS} -D${CACHE_VAR}${CACHE_VAR_TYPE}=\"${${CACHE_VAR}}\"")
endif()
endforeach()
message("CMAKE_ARGS: ${CMAKE_ARGS}")
This is a bit fragile as it depends on the fact that each variable which has been set via the command line has the phrase "No help, variable specified on the command line." specified as its HELPSTRING property. If CMake changes this default HELPSTRING, you'd have to update the if statement accordingly.
If this isn't what you want CMAKE_ARGS to show, but instead only the arguments from the current execution, then I don't think there's a way to do that short of hacking CMake's source code! However, I expect this isn't what you want since all the previous command line arguments are effectively re-applied every time.
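To get the reconstructed string into a template like the my-config.in from the question, configure_file is the usual route. A minimal sketch, assuming the placeholder is written in @-style (configure_file substitutes @VAR@ or ${VAR}, not #VAR#):
# my-config.in would contain:  config="@CMAKE_ARGS@"
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/my-config.in
               ${CMAKE_CURRENT_BINARY_DIR}/my-config @ONLY)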
One way to store CMake command line arguments is to have a wrapper script called ~/bin/cmake (***1), which does 2 things:
create ./cmake_call.sh that stores the command line arguments
call the real cmake executable with the command line arguments
~/bin/cmake # code is shown below
#!/usr/bin/env bash
#
# Place this file into this location: ~/bin/cmake
# (with executable rights)
#
# This is a wrapper for cmake!
# * It calls cmake -- see last line of the script
# It also:
# * Creates a file cmake_call.sh in the current directory (build-directory)
# which stores the cmake call with all its cmake flags etc.
# (It also stores successive calls to cmake, so that you have a trace of all your cmake calls)
#
# You can simply reinvoke the last cmake commandline with: ./cmake_call.sh !!!!!!!!!!
#
# cmake_call.sh is not created
# when cmake is called without any flags,
# or when it is called with flags such as --help, -E, -P, etc. (refer to NON_STORE_ARGUMENTS -- you might need to modify it to suit your needs)
SCRIPT_PATH=$(readlink -f "$BASH_SOURCE")
SCRIPT_DIR=$(dirname "$SCRIPT_PATH")
#http://stackoverflow.com/a/13864829
if [ -z ${SUDO_USER+x} ]; then
# var SUDO_USER is unset
user=$USER
else
user=$SUDO_USER
fi
#http://stackoverflow.com/a/34621068
path_append () { path_remove $1 $2; export $1="${!1}:$2"; }
path_prepend() { path_remove $1 $2; export $1="$2:${!1}"; }
path_remove () { export $1="`echo -n ${!1} | awk -v RS=: -v ORS=: '$1 != "'$2'"' | sed 's/:$//'`"; }
path_remove PATH ~/bin # when calling cmake (at the bottom of this script), do not invoke this script again!
# when called with no arguments, don't create cmake_call.sh
if [[ $# -eq 0 ]]; then
cmake "$@"
exit
fi
# variable NON_STORE_ARGUMENTS stores flags which, if any are present, cause cmake_call.sh to NOT be created
read -r -d '' NON_STORE_ARGUMENTS <<'EOF'
-E
--build
#-N
-P
--graphviz
--system-information
--debug-trycompile
#--debug-output
--help
-help
-usage
-h
-H
--version
-version
/V
--help-full
--help-manual
--help-manual-list
--help-command
--help-command-list
--help-commands
--help-module
--help-module-list
--help-modules
--help-policy
--help-policy-list
--help-policies
--help-property
--help-property-list
--help-properties
--help-variable
--help-variable-list
--help-variables
EOF
NON_STORE_ARGUMENTS=$(echo "$NON_STORE_ARGUMENTS" | head -c -1 `# remove last newline` | sed "s/^/^/g" `#begin every line with ^` | tr '\n' '|')
#echo "$NON_STORE_ARGUMENTS" ## for debug purposes
## store all the args
ARGS_STR=
for arg in "$@"; do
if cat <<< "$arg" | grep -E -- "$NON_STORE_ARGUMENTS" &> /dev/null; then # don't use echo "$arg" ....
# since echo "-E" does not do what you want here,
# but cat <<< "-E" does what you want (print minus E)
# do not create cmake_call.sh
cmake "$@"
exit
fi
# concatenate to ARGS_STR
ARGS_STR="${ARGS_STR}$(echo -n " \"$arg\"" | sed "s,\($(pwd)\)\(\([/ \t,:;'\"].*\)\?\)$,\$(pwd)\2,g")"
# replace $(pwd) followed by
# / or
# whitespace or
# , or
# : or
# ; or
# ' or
# "
# or nothing
# with \$(pwd)
done
if [[ ! -e $(pwd)/cmake_call.sh ]]; then
echo "#!/usr/bin/env bash" > $(pwd)/cmake_call.sh
# escaping:
# note in the HEREDOC below, \\ means \ in the output!!
# \$ means $ in the output!!
# \` means ` in the output!!
cat <<EOF >> $(pwd)/cmake_call.sh
#http://stackoverflow.com/a/34621068
path_remove () { export \$1="\`echo -n \${!1} | awk -v RS=: -v ORS=: '\$1 != "'\$2'"' | sed 's/:\$//'\`"; }
path_remove PATH ~/bin # when calling cmake (at the bottom of this script), do not invoke ~/bin/cmake but real cmake!
EOF
else
# remove bottom 2 lines from cmake_call.sh
sed -i '$ d' $(pwd)/cmake_call.sh
sed -i '$ d' $(pwd)/cmake_call.sh
fi
echo "ARGS='${ARGS_STR}'" >> $(pwd)/cmake_call.sh
echo "echo cmake \"\$ARGS\"" >> $(pwd)/cmake_call.sh
echo "eval cmake \"\$ARGS\"" >> $(pwd)/cmake_call.sh
#echo "eval which cmake" >> $(pwd)/cmake_call.sh
chmod +x $(pwd)/cmake_call.sh
chown $user: $(pwd)/cmake_call.sh
cmake "$@"
Usage:
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=$(pwd)/install ..
This will create cmake_call.sh with the following content:
#!/usr/bin/env bash
#http://stackoverflow.com/a/34621068
path_remove () { export $1="`echo -n ${!1} | awk -v RS=: -v ORS=: '$1 != "'$2'"' | sed 's/:$//'`"; }
path_remove PATH ~/bin # when calling cmake (at the bottom of this script), do not invoke ~/bin/cmake but real cmake!
ARGS=' "-DCMAKE_BUILD_TYPE=Debug" "-DCMAKE_INSTALL_PREFIX=$(pwd)/install" ".."'
echo cmake "$ARGS"
eval cmake "$ARGS"
The 3rd last line stores the cmake arguments.
You can now reinvoke the exact command-line that you used by simply calling:
./cmake_call.sh
Footnotes:
(***1) ~/bin/cmake is usually in the PATH because of ~/.profile. When creating ~/bin/cmake the very 1st time, it might be necessary to log out and back in, so that .profile sees ~/bin.
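If ~/bin is not already on your PATH, a typical line to add to ~/.profile (an assumption about your setup; many distributions ship something equivalent already) is:
export PATH="$HOME/bin:$PATH"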
A very Linux specific way of achieving the same objective:
if(${CMAKE_SYSTEM_NAME} STREQUAL Linux)
file(STRINGS /proc/self/status _cmake_process_status)
# Grab the PID of the parent process
string(REGEX MATCH "PPid:[ \t]*([0-9]*)" _ ${_cmake_process_status})
# Grab the absolute path of the parent process
file(READ_SYMLINK /proc/${CMAKE_MATCH_1}/exe _cmake_parent_process_path)
# Compute CMake arguments only if CMake was not invoked by the native build
# system, to avoid dropping user specified options on re-triggers.
if(NOT ${_cmake_parent_process_path} STREQUAL ${CMAKE_MAKE_PROGRAM})
execute_process(COMMAND bash -c "tr '\\0' ' ' < /proc/$PPID/cmdline"
OUTPUT_VARIABLE _cmake_args)
string(STRIP "${_cmake_args}" _cmake_args)
set(CMAKE_ARGS "${_cmake_args}"
CACHE STRING "CMake command line args (set by end user)" FORCE)
endif()
message(STATUS "User Specified CMake Arguments: ${CMAKE_ARGS}")
endif()

tput: unknown terminal

I'm on AIX-6.1 and I'm trying to make use of tput inside my $PS1.
I've confirmed I can't even run tput from the commandline. Following is my session:
# tput
unknown terminal "xterm"
# echo $TERM
xterm
# tput -T ansi
unknown terminal "ansi"
In fact, ...
# ls /usr/lib/terminfo/x
x1700 xl83 xterm+pcc3 xterm+pcfkeys xterm-88color xterm-hp xterm-old xterm-vi
x1720 xtalk xterm+pcf0 xterm+pcfn xterm-8bit xterm-ic xterm-r5 xterm-vt220
x1750 xterm xterm+pcf1 xterm-16color xterm-basic xterm-mono xterm-r6 xterm-vt52
x820 xterm+pcc0 xterm+pcf2 xterm-24 xterm-bold xterm-new xterm-rep xterm-xfree86
xdku xterm+pcc1 xterm+pcf3 xterm-256color xterm-boldso xterm-noapp xterm-sco xterm-xmc
xitex xterm+pcc2 xterm+pcfN xterm-65 xterm-color xterm-nrc xterm-sun xterms
# ls /usr/lib/terminfo/x | wc -l
48
# for term in $(ls /usr/lib/terminfo/x) ; do tput -T $term ; done 2>&1 | grep 'unknown terminal' | wc -l
48
# for term in $(ls /usr/lib/terminfo/x) ; do TERM=$term tput ; done 2>&1 | grep 'unknown terminal' | wc -l
48
Any ideas? Thanks in advance.
Is your TERMINFO variable set? Without it, I believe the system won't find your terminfo files. Or perhaps it is set incorrectly?
If you're running sh, ksh, bash or similar, try:
export TERMINFO=/usr/lib/terminfo
If you're not sure what shell you're using (I'm pretty sure you know, but others might read this too), type:
echo $SHELL
If you're using csh, tcsh or similar, then you should instead type:
setenv TERMINFO /usr/lib/terminfo
After that, try running tput again.
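A quick way to check whether the database is now being found (assuming the terminfo tree really does live under /usr/lib/terminfo on your AIX box):
export TERMINFO=/usr/lib/terminfo
tput longname    # should print a terminal description instead of 'unknown terminal "xterm"'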
I fixed this on macOS Catalina with:
export TERMINFO=/usr/share/terminfo