Tail command - follow by name on Solaris

Is there a way to tail a file by name on Solaris 10?
Equivalent to:
tail --follow=name
The manual for tail on Solaris shows no such option. Only -f is available, and it appears to follow the file by descriptor.

According to the GNU tail manual, --follow with no argument is the same as -f:
-f, --follow[={name|descriptor}]
output appended data as the file grows;
an absent option argument means 'descriptor'
A -f option is found in the POSIX description of tail. However, the --follow option (which accepts an optional argument) is not in POSIX. The GNU manual goes on to describe how --follow differs from -f:
With --follow (-f), tail defaults to following the file descriptor,
which means that even if a tail'ed file is renamed, tail will
continue to track its end. This default behavior is not desirable
when you really want to track the actual name of the file, not the
file descriptor (e.g., log rotation). Use --follow=name in that
case. That causes tail to track the named file in a way that
accommodates renaming, removal and creation.
That is, --follow provides for reopening the file if the actual file was renamed. POSIX does not appear to address this use case.
Neither of Solaris's tail implementations provides a direct equivalent (compare the manual pages for /usr/bin/tail and /usr/xpg4/bin/tail).
GNU tail is part of the coreutils package. You may already have it installed on Solaris 10, in /opt/sfw/bin/tail. For instance, pkginfo shows it on my Solaris 10 machine as SFWcoreu.
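If GNU tail is present there, following by name is then just a matter of invoking that binary instead of /usr/bin/tail (the log path below is only a placeholder):
/opt/sfw/bin/tail --follow=name /var/log/myapp.log
For GNU tail, -F is shorthand for --follow=name --retry, which additionally keeps retrying while the file is temporarily absent during rotation.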

Solaris tail does not have --follow, as you have seen.
The workaround is redirection:
tail -f inputfile > filewritten_by_tail

Your best bet is to install GNU tail, either from source or from a freeware repository.
If you really want to stick with the tail bundled with Solaris 10, you'll need to wrap it with a custom monitoring program that restarts it whenever the target file is replaced.
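A minimal sketch of such a wrapper, assuming ksh (bundled with Solaris 10) and a placeholder log path; it restarts a plain tail -f whenever the file's inode changes:
#!/bin/ksh
# Rough approximation of GNU tail --follow=name using only bundled tools.
LOGFILE=/var/log/myapp.log
while :; do
    tail -f "$LOGFILE" &
    pid=$!
    inode=$(ls -i "$LOGFILE" 2>/dev/null | awk '{print $1}')
    # Poll until the file is replaced or removed (e.g. by log rotation).
    while [ "$(ls -i "$LOGFILE" 2>/dev/null | awk '{print $1}')" = "$inode" ]; do
        sleep 5
    done
    kill $pid 2>/dev/null
done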

Works like a charm on Solaris 11!
/usr/bin/gtail -F

Related

scp fails with "protocol error: filename does not match request"

I have a script that uses SCP to pull a file from a remote Linux host on AWS. After running the same code nightly for about 6 months without issue, it started failing today with protocol error: filename does not match request. I reproduced the issue on some simpler filenames below:
$ scp -i $IDENT $HOST_AND_DIR/"foobar" .
# the file is copied successfully
$ scp -i $IDENT $HOST_AND_DIR/"'foobar'" .
protocol error: filename does not match request
# used to work, i swear...
$ scp -i $IDENT $HOST_AND_DIR/"'foobarbaz'" .
scp: /home/user_redacted/foobarbaz: No such file or directory
# less surprising...
The reason for my single quotes was that I was grabbing a file with spaces in the name originally. To deal with the spaces, I had done $HOST_AND_DIR/"'foo bar'" for many months, but starting today, it would only accept $HOST_AND_DIR/"foo\ bar". So, my issue is fixed, but I'm still curious about what's going on.
I Googled the error message, but I don't see any real mentions of it, which surprises me.
Both hosts involved have OpenSSL 1.0.2g in the output of ssh -v localhost, and bash --version says GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
Any ideas?
I ended up having a look through the source code and found the commit where this error is thrown:
GitHub Commit
remote->local directory copies satisfy the wildcard specified by the
user.
This checking provides some protection against a malicious server
sending unexpected filenames, but it comes at a risk of rejecting
wanted files due to differences between client and server wildcard
expansion rules.
For this reason, this also adds a new -T flag to disable the check.
They have added a new flag, -T, that disables this new check, so the old behaviour is still available. However, I suppose we should find out why the filenames we're using are being flagged in the first place.
In my case, I had [] characters in the filename that needed to be escaped using one of the options listed here. For example:
scp USERNAME@IP_ADDR:"/tmp/foo\[bar\].txt" /tmp
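If you trust the remote host and simply want the old behaviour back, the -T flag mentioned above disables the new filename check in recent OpenSSH clients (host and path variables are the same placeholders as in the question):
$ scp -T -i $IDENT $HOST_AND_DIR/"'foo bar'" .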

Can't start Linux "screen" with logging to a specific output file

I have the problem that I want to enable logging of a screen session at the start of it which then saves the log to a specific file.
What I have until now was:
screen -AmdSL cod2war /home/cod2server/scripts/service_28969.sh
where service_28969.sh is a shell script that calls other scripts which produce output.
I started multiple of those screen-sessions with different names, for example
screen -AmdSL cod2sd /home/cod2server/scripts/service_28962.sh
-L enables logging, as screen's man page says, and will save the output in a file called 'screenlog.0'. Now, since I have multiple of those screens, only one of them produces output saved in that log file (I can't find other 'screenlog.*' files in that folder).
I thought to use the -Logfile "file" option from the same man page, but it doesn't work for me and I can't find out what I'm doing wrong.
screen -Logfile cod2sd.log -AmdS cod2sd /home/u268450/cod2server/scripts/service_28962.sh
will produce the following error:
Use: screen [-opts] [cmd [args]]
or: screen -r [host.tty]
Options:
[...]
Error: Unknown option Logfile
and
screen -AmdS cod2sd /home/u268450/cod2server/scripts/service_28962.sh -Logfile cod2sd.log
will run without any error and start the screen, but without any logging at all.
You can specify a logfile from within the default startup ~/.screenrc file using a line like
logfile mylog.log
To do this from the command line you can create a file mystartup to hold the above line, then use option -c mystartup to tell screen to read this file for setup instead of the default. If you also need to have ~/.screenrc read, you can add the source command to your startup file. The final result would look something like:
echo 'logfile mylog.log
source ~/.screenrc' >mystartup
screen -AmdSL cod2war -c mystartup /home/cod2server/scripts/service_28969.sh
This works for me:
screen -L -Logfile /Logs/Screen/`date +%Y%m%d`_screen.log
The configs I checked:
screen version 4.08.00 (GNU) 05-Feb-20 on FreeBSD 12.2
and
version 4.06.02 (GNU) 23-Oct-17 on Debian GNU/Linux 10 (buster)
and
version 4.00.03 (FAU) 23-Oct-06 on Mac OS X 10.9.5.
I just ran into this error myself and found a solution that worked with my Python file; sharing it for anyone else who might run into this issue:
screen -L -Logfile LOGFILENAME.LOG -dmS SCREENNAME python3 ./FILENAME.PY
I have no idea if this is the 'correct' way but it works.
-L enables logging
-Logfile LOGFILENAME.LOG sets the name (and extension) of the log file
-dmS SCREENNAME: -dm runs the session detached and -S lets you name it
python3 ./FILENAME.PY is my script in this case, but I assume any other command would work here
I have tried different orderings of these options, and this was the only way I managed to get them all to run without issues. Hope this helps.

script sh linux SQL procedure execution [duplicate]

I am making an NW.js app on macOS, and want to run the app in dev mode
by double-clicking on an icon.
In the first step, I'm trying to make my shell script work.
Using VS Code on Windows (I wanted to gain time), I have created a run-nw file at the root of my project, containing this:
#!/bin/bash
cd "src"
npm install
cd ..
./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &
but I get this output:
$ sh ./run-nw
: command not found
: No such file or directory
: command not found
: No such file or directory
Usage: npm <command>
where <command> is one of: (snip commands list)
(snip npm help)
npm#3.10.3 /usr/local/lib/node_modules/npm
: command not found
: No such file or directory
: command not found
Some things I don't understand.
It seems that it takes empty lines as commands.
In my editor (VS Code) I have tried to replace \r\n with \n
(in case the \r creates problems) but it changes nothing.
It seems that it doesn't find the folders
(with or without the dirname instruction),
or maybe it doesn't know about the cd command ?
It seems that it doesn't understand the install argument to npm.
The part that really weirds me out, is that it still runs the app
(if I did an npm install manually)...
Not able to make it work properly, and suspecting something weird with
the file itself, I created a new one directly on the Mac, using vim this time.
I entered the exact same instructions, and... now it works without any
issues.
A diff on the two files reveals exactly zero difference.
What can be the difference? What can make the first script not work? How can I find out?
Update
Following the accepted answer's recommendations, after the wrong line
endings came back, I checked multiple things.
It turns out that since I copied my ~/.gitconfig from my Windows
machine, I had autocrlf=true, so every time I modified the bash
file under Windows, it re-set the line endings to \r\n.
So, in addition to running dos2unix (which you will have to
install using Homebrew on a Mac), if you're using Git, check your
.gitconfig file.
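For reference, a quick way to inspect and change that setting from the command line (standard git commands):
git config --global core.autocrlf          # show the current value, if any
git config --global core.autocrlf input    # convert CRLF to LF on commit, leave checkouts untouched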
Yes. Bash scripts are sensitive to line-endings, both in the script itself and in data it processes. They should have Unix-style line-endings, i.e., each line is terminated with a Line Feed character (decimal 10, hex 0A in ASCII).
DOS/Windows line endings in the script
With Windows or DOS-style line endings, each line is terminated with a Carriage Return followed by a Line Feed character. You can see this otherwise invisible character in the output of cat -v yourfile:
$ cat -v yourfile
#!/bin/bash^M
^M
cd "src"^M
npm install^M
^M
cd ..^M
./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &^M
In this case, the carriage return (^M in caret notation or \r in C escape notation) is not treated as whitespace. Bash interprets the first line after the shebang (consisting of a single carriage return character) as the name of a command/program to run.
Since there is no command named ^M, it prints : command not found
Since there is no directory named "src"^M (or src^M), it prints : No such file or directory
It passes install^M instead of install as an argument to npm which causes npm to complain.
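You can reproduce the symptom deliberately with a tiny file that has CRLF line endings (the file name is just an example):
$ printf 'cd src\r\necho done\r\n' > crlf-demo.sh
$ bash crlf-demo.sh
The cd fails even if a real src directory exists, because the directory bash looks for is actually "src" plus an invisible trailing carriage return.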
DOS/Windows line endings in input data
Like above, if you have an input file with carriage returns:
hello^M
world^M
then it will look completely normal in editors and when writing it to screen, but tools may produce strange results. For example, grep will fail to find lines that are obviously there:
$ grep 'hello$' file.txt || grep -x "hello" file.txt
(no match because the line actually ends in ^M)
Appended text will instead overwrite the line because the carriage return moves the cursor to the start of the line:
$ sed -e 's/$/!/' file.txt
!ello
!orld
String comparison will seem to fail, even though strings appear to be the same when writing to screen:
$ a="hello"; read b < file.txt
$ if [[ "$a" = "$b" ]]
then echo "Variables are equal."
else echo "Sorry, $a is not equal to $b"
fi
Sorry, hello is not equal to hello
Solutions
The solution is to convert the file to use Unix-style line endings. There are a number of ways this can be accomplished:
This can be done using the dos2unix program:
dos2unix filename
Open the file in a capable text editor (Sublime, Notepad++, not Notepad) and configure it to save files with Unix line endings, e.g., with Vim, run the following command before (re)saving:
:set fileformat=unix
If you have a version of the sed utility that supports the -i or --in-place option, e.g., GNU sed, you could run the following command to strip trailing carriage returns:
sed -i 's/\r$//' filename
With other versions of sed, you could use output redirection to write to a new file. Be sure to use a different filename for the redirection target (it can be renamed later).
sed 's/\r$//' filename > filename.unix
Similarly, the tr translation filter can be used to delete unwanted characters from its input:
tr -d '\r' <filename >filename.unix
Cygwin Bash
With the Bash port for Cygwin, there’s a custom igncr option that can be set to ignore the Carriage Return in line endings (presumably because many of its users use native Windows programs to edit their text files).
This can be enabled for the current shell by running set -o igncr.
Setting this option applies only to the current shell process so it can be useful when sourcing files with extraneous carriage returns. If you regularly encounter shell scripts with DOS line endings and want this option to be set permanently, you could set an environment variable called SHELLOPTS (all capital letters) to include igncr. This environment variable is used by Bash to set shell options when it starts (before reading any startup files).
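A concrete sketch (Cygwin-specific; the igncr option does not exist in other builds of Bash):
set -o igncr    # affects the current shell only; can also go in ~/.bashrc
To make it the default, define SHELLOPTS=igncr as a Windows environment variable (e.g. via the System control panel) so it is already present when Bash starts; the variable is read-only from inside Bash itself.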
Useful utilities
The file utility is useful for quickly seeing which line endings are used in a text file. Here's what it prints for each file type:
Unix line endings: Bourne-Again shell script, ASCII text executable
Mac line endings: Bourne-Again shell script, ASCII text executable, with CR line terminators
DOS line endings: Bourne-Again shell script, ASCII text executable, with CRLF line terminators
The GNU version of the cat utility has a -v, --show-nonprinting option that displays non-printing characters.
The dos2unix utility is specifically written for converting text files between Unix, Mac and DOS line endings.
Useful links
Wikipedia has an excellent article covering the many different ways of marking the end of a line of text, the history of such encodings and how newlines are treated in different operating systems, programming languages and Internet protocols (e.g., FTP).
Files with classic Mac OS line endings
With Classic Mac OS (pre-OS X), each line was terminated with a Carriage Return (decimal 13, hex 0D in ASCII). If a script file was saved with such line endings, Bash would only see one long line like so:
#!/bin/bash^M^Mcd "src"^Mnpm install^M^Mcd ..^M./tools/nwjs-sdk-v0.17.3-osx-x64/nwjs.app/Contents/MacOS/nwjs "src" &^M
Since this single long line begins with an octothorpe (#), Bash treats the line (and the whole file) as a single comment.
Note: In 2001, Apple launched Mac OS X which was based on the BSD-derived NeXTSTEP operating system. As a result, OS X also uses Unix-style LF-only line endings and since then, text files terminated with a CR have become extremely rare. Nevertheless, I think it’s worthwhile to show how Bash would attempt to interpret such files.
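Such a CR-only file can be converted with the same tools shown above, e.g. with tr (file names are placeholders):
tr '\r' '\n' < classic_mac_script.sh > unix_script.sh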
On JetBrains products (PyCharm, PHPStorm, IDEA, etc.), you'll need to click on CRLF/LF to toggle between the two types of line separators (\r\n and \n).
I was trying to startup my docker container from Windows and got this:
Bash script and /bin/bash^M: bad interpreter: No such file or directory
I was using Git Bash and the problem was the Git configuration. I did the steps below and it worked; this configures Git to not convert line endings on checkout:
git config --global core.autocrlf input
delete your local repository
clone it again.
Many thanks to Jason Harmon in this link:
https://forums.docker.com/t/error-while-running-docker-code-in-powershell/34059/6
Before that, I tried the following, which didn't work:
dos2unix scriptname.sh
sed -i -e 's/\r$//' scriptname.sh
sed -i -e 's/^M$//' scriptname.sh
If you're using the read command to read from a file (or pipe) that is (or might be) in DOS/Windows format, you can take advantage of the fact that read will trim whitespace from the beginning and ends of lines. If you tell it that carriage returns are whitespace (by adding them to the IFS variable), it'll trim them from the ends of lines.
In bash (or zsh or ksh), that means you'd replace this standard idiom:
IFS= read -r somevar # This will not trim CR
with this:
IFS=$'\r' read -r somevar # This *will* trim CR
(Note: the -r option isn't related to this, it's just usually a good idea to avoid mangling backslashes.)
If you're not using the IFS= prefix (e.g. because you want to split the data into fields), then you'd replace this:
read -r field1 field2 ... # This will not trim CR
with this:
IFS=$' \t\n\r' read -r field1 field2 ... # This *will* trim CR
If you're using a shell that doesn't support the $'...' quoting mode (e.g. dash, the default /bin/sh on some Linux distros), or your script even might be run with such a shell, then you need to get a little more complex:
cr="$(printf '\r')"
IFS="$cr" read -r somevar # Read trimming *only* CR
IFS="$IFS$cr" read -r field1 field2 ... # Read trimming CR and whitespace, and splitting fields
Note that normally, when you change IFS, you should put it back to normal as soon as possible to avoid weird side effects; but in all these cases, it's a prefix to the read command, so it only affects that one command and doesn't have to be reset afterward.
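Putting that together, a loop that reads a possibly DOS-formatted file line by line while stripping the carriage returns might look like this (the file name is a placeholder):
while IFS=$'\r' read -r line; do
    printf '%s\n' "$line"
done < maybe_dos_file.txt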
Coming from a duplicate, if the problem is that you have files whose names contain ^M at the end, you can rename them with
for f in *$'\r'; do
mv "$f" "${f%$'\r'}"
done
You probably want to fix whatever caused these files to have broken names in the first place (probably a script which created them should be dos2unixed and then rerun?), but sometimes this is not feasible.
The $'\r' syntax is Bash-specific; if you have a different shell, maybe you need to use some other notation. Perhaps see also Difference between sh and bash
Since VS Code is being used, we can see CRLF or LF in the bottom right depending on what's being used, and clicking on it lets us switch between them.
We can also use the "Change End of Line Sequence" command from the command palette. Whichever is easier to remember, since they're functionally the same.
For Notepad++ users, this can be solved via Edit > EOL Conversion > Unix (LF).
One more way to get rid of the unwanted CR ('\r') character is to run the tr command, for example:
$ tr -d '\r' < dosScript.py > nixScript.py
I ran into this issue when using git with WSL.
Git has a feature where it changes the line endings of files according to the OS you are using: on Windows it makes sure the line endings are \r\n, which is not compatible with Linux, which uses only \n.
You can resolve this problem by adding a file named .gitattributes to your git root directory with lines like the following:
config/* text eol=lf
run.sh text eol=lf
In this example, all files inside the config directory, as well as the run.sh file, will have line-feed-only line endings.
The simplest way on Mac / Linux: create a file using the 'touch' command, open it with the vi or Vim editor, paste your code, and save. This automatically removes the Windows characters.
If you are using a text editor like BBEdit you can do it at the status bar. There is a selection where you can switch.
For IntelliJ users, here is the solution for writing Linux scripts:
Use LF - Unix and macOS (\n)
Scripts may call each other.
An even better magic solution is to convert all scripts in the folder/subfolders:
find . -name "*.sh" -exec sed -i -e 's/\r$//' {} +
You can use dos2unix too but many servers do not have it installed by default.
For the sake of completeness, I'll point out another solution which can solve this problem permanently without the need to run dos2unix all the time:
sudo ln -s /bin/bash `printf 'bash\r'`

Multiple Job (j3)

I am trying to run a GNU make file with multiple jobs.
When I try executing 'make.exe -r -j3', I receive the following two errors:
make.exe: Do not specify -j or --jobs if sh.exe is not available.
make.exe: Resetting make for single job mode.
Do I have to add ' $(SH) -c' somewhere in the makefile? If so, where?
The error message suggests that make cannot find sh.exe. The file names indicate you are probably on Cygwin. I would investigate setting PATH to include the location of sh.exe, or defining the value of SHELL as the name (or, even, full path) of your shell.
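Two things worth trying, sketched here (the Cygwin install path is an assumption; adjust it to wherever sh.exe actually lives on your system). Either point SHELL at it near the top of the makefile:
SHELL := C:/cygwin/bin/sh.exe
or put its directory on the PATH in the Windows command prompt before invoking make:
set PATH=C:\cygwin\bin;%PATH%
make.exe -r -j3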
Are you running this on Windows (more specifically, in the Windows cmd shell)? If you are, you might want to read this:
http://www.gnu.org/software/make/manual/make.html#Parallel
more specifically:
On MS-DOS, the ‘-j’ option has no effect, since that system doesn't support multi-processing.
Once again, assuming you're running on Windows, you should get MinGW or Cygwin.

How to detect whether running under valgrind in a makefile or shell script?

I need to detect whether my Makefile is running under valgrind (indirectly, using valgrind --trace-children=yes). I know how to do it from C, but I have not found a way to do it from a script.
The earlier answer works on Linux only. For Mac OS X I am going to grep for VALGRIND_STARTUP_PWD in the environment, unless someone has a better idea.
from a shell:
grep -q '/valgrind' /proc/$$/maps && echo "valgrindage"
This determines whether the valgrind preloaded libraries are present in the address map of the process. This is reasonably effective, but if you happen to have an unrelated library whose path shares the '/valgrind' moniker, you will get a false positive (unlikely).
[I changed the grep pattern from vg_preload to /valgrind, as testing on Debian/Ubuntu revealed that the library name was different, while a directory match of valgrind is most likely to succeed.]
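If the check is needed inside the makefile itself, one way to wrap it is a parse-time variable (a sketch; GNU make and Linux /proc assumed, and with --trace-children=yes the command run by $(shell ...) is itself traced, so its own address map mentions valgrind):
UNDER_VALGRIND := $(shell grep -q '/valgrind' /proc/self/maps && echo yes)
test:
	@echo "under valgrind: $(UNDER_VALGRIND)"
The recipe line must be indented with a tab, as usual for make.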