Unable to understand some code in .screenrc - gnu-screen

I am not sure about the exact purpose of the following code by Rampion.
It apparently executes command(s) at the cursor position.
# man-word.screen
# prevent messages from slowing this down
msgminwait 0
# copy word starting at cursor
copy # I am not sure why we need this
stuff " e "
# open a new window that waits for a word to run man on
# (and uses 'read' to pause on error)
screen -t man /bin/sh -c 'cat | xargs man || read' # option -c seems to mean execute
# feed that window the copied word
# be sure to enter '^M' as 'CTRL-V ENTER' and '^D' as 'CTRL-V CTRL-D' (in vim)
paste '.'
# should display as 'stuff "^M^D"'
stuff " "
# turn message waiting back on
msgminwait 1
# vi: ft=screen
The code is bound under ^g like this:
bindkey -m ^f source /Users/masi/bin/screen/edit-file-under-cursor.screen
which is the same as
bind f source /Users/masi/bin/screen/edit-file-under-cursor.screen
I run the code while my cursor is at the beginning of the following line:
vim ~/.zshrc
I get a new buffer that looks like this:
(screenshot: http://files.getdropbox.com/u/175564/screen-rampion.png)
What is the purpose of the command?

So the command doesn't run arbitrary code. All it does is run man <whatever> in a new screen window if your cursor was over the word <whatever>.
The reason the copy command is there is that you need to tell screen that you want to copy something. You may not always be in screen's copy mode when over a path - for example, you could be using vim, and have vim's cursor over a path. If you are already in copy mode, then it's a no-op.
screen -t man /bin/sh -c 'cat | xargs man || read'
screen :: open a new window
-t man :: give it a title of man
/bin/sh -c 'cat | xargs man || read' :: run this command in the new window, rather than opening the default shell in the new window.
/bin/sh :: a shell program
-c 'cat | xargs man || read' :: run the given string as a script, rather than opening in interactive mode
cat | :: wait for user input (ended with a newline and a CTRL-D), then pipe it as user input to the next command
xargs man :: call man, using whatever's read from standard input as command line arguments for man
|| read :: if the previous commands return non-zero, wait for the user to hit enter
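You can try that window's command by hand in a normal terminal (a hypothetical session, just to see the behaviour):
/bin/sh -c 'cat | xargs man || read'
# now type a word such as "ls", press ENTER, then CTRL-D: "man ls" opens
# if man fails (e.g. no such page), "read" keeps the shell waiting until you press ENTER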
From your output it looks like
The -c part of the command isn't getting run, since it looks like a new shell (the $ is a hint).
The stuff "^M^D" part wasn't transcribed correctly. The next non-comment line after paste '.' should be entered, keystroke for keystroke, as:
's', 't', 'u', 'f', 'f', ' ', '"', <CTRL-V>, <ENTER>, <CTRL-V>, <CTRL-D>, '"'
If you had downloaded the file, rather than transcribing it, you might not have had those issues.
Also, bindkey -m ^f is not the same as bind f, and neither binds a command to ^g.
bindkey -m ^f binds a command to <CTRL-f>, but only when in copy mode.
bind f binds a command to <CTRL-A> f, in all modes.
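To make the difference concrete, here is a sketch of the two bindings side by side in .screenrc terms (same script from the question, different triggers):
# triggers on CTRL-f, but only while in copy/scrollback mode:
bindkey -m ^f source /Users/masi/bin/screen/edit-file-under-cursor.screen
# triggers on CTRL-A f, from any mode:
bind f source /Users/masi/bin/screen/edit-file-under-cursor.screen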

Related

Screen command for buffer

I want to read the contents of a file into the buffer and write it to standard output. I did this:
screen -X readbuf /home/nitro/file | screen -X writebuf | cat /tmp/screen-exchange
but the cat command showed me the screen-exchange file with the previous result of the readbuf command. If I run these commands separately, everything is correct and I get the modified screen-exchange file.
How can I perform all three commands readbuf, writebuf and cat at once?
This works for me:
screen -X eval "readbuf /tmp/x" writebuf && cat /tmp/screen-exchange
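The trick is that eval lets a single screen -X invocation run several screen commands in order, so readbuf and writebuf happen before cat runs. If you do this often, a small wrapper is handy; a sketch, assuming the default exchange file /tmp/screen-exchange and a hypothetical function name:
showbuf() {
    # load the file into screen's paste buffer, dump it to the exchange file, then print it
    screen -X eval "readbuf $1" writebuf && cat /tmp/screen-exchange
}
showbuf /home/nitro/file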

Bash while read : output issue

Updated:
Initial issue:
A while read loop was printing every line that it read.
Answer: put done <<< "$var" at the end of the loop.
Subsequent issue:
I need some explanation of some shell code:
I have this:
temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
That gets results looking like this:
<ip1> <site1>
<ip2> <site2>
<ip3> <site3>
<ip4> <site4>
up to 5000 IP addresses.
I did a "while loop":
while [ `find $proc_dir -name snmpproc* | wc -l` -ge "$max_proc_snmpget" ];do
{
echo "sleeping, fping in progress";
sleep 1;
}
done
temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
while read ip codesite;do
{
sendSNMPGET $ip $snmp_community $code_site &
}
done<<<"$temp_ip"
And the sendSNMPGET function is:
sendSNMPGET() {
touch $procdir/snmpproc.$$
hostname=`snmpget -v1 -c $2 $1 sysName.0`
if [ "$hostname" != "" ]
then
echo "hi test"
fi
rm -f $procdir/snmpproc.$$
}
The $max_proc_snmpget is set to 30
When it executes, the read is OK and there is no more printing on screen, but the child processes seem to be disoriented:
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
Why can't it handle this?
If temp_ip contains the name of a file that you want to read, then use:
done<"$temp_ip"
In your case, it appears that temp_ip is not a file name but contains the actual data that you want. In that case, use:
done<<<"$temp_ip"
Take care that the variable is placed inside double-quotes. That protects the data against the shell's word splitting which would result in the replacement of new line characters with spaces.
More details
In bash, an expression like <"$temp_ip" is called a redirection. In this case it means that the while loop will get its standard input from the file called $temp_ip.
The expression <<<"$temp_ip" is called a here string. In this case, it means that the while loop will get its standard input from the data in the variable $temp_ip.
More information on both redirection and here strings in man bash.
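A toy illustration of the two forms (the file and variable names are made up for the example):
printf 'a 1\nb 2\n' > /tmp/pairs.txt
while read key val; do echo "from file: $key=$val"; done < /tmp/pairs.txt

pairs=$(printf 'a 1\nb 2\n')
while read key val; do echo "from variable: $key=$val"; done <<< "$pairs"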
Or you can parse the output of your initial command directly:
$mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);" | \
while read ip codesite
do
...
done
If you want to improve the performance and run some of the 5,000 SNMPGETs in parallel, I would recommend using GNU Parallel (linked below) like this:
$mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);" | parallel -k -j 20 -N 2 sendSNMPGET {1} $snmp_community {2}
The -k will keep the parallel output in order. The -j 20 will run up to 20 SNMPGETs in parallel at a time. The -N 2 means take 2 parameters from the mysql output per job (i.e. ip and codesite). {1} and {2} are your ip and codesite parameters.
http://www.gnu.org/software/parallel/
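A toy sketch of the -k/-j/-N/{1}/{2} behaviour, with echo standing in for sendSNMPGET and each value on its own input line:
printf '10.0.0.1\nsiteA\n10.0.0.2\nsiteB\n' | parallel -k -j 20 -N 2 echo "would poll {1} for {2}"
Note that the mysql output above has both fields on one line; in that case you may need parallel's --colsep option so that {1} and {2} refer to the two columns of each line.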
I propose to not store the result value but use it directly:
while read ip codesite
do
sendSNMPGET "$ip" "$snmp_community" "$code_site" &
done < <(
"$mysql" --skip-column-names -h "$db_address" -u "$db_user" -p"$db_passwd" "$db_name" \
-e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
This way you start the mysql command in a subshell and use its output as input to the while loop (similar to piping, which is also an option here).
But I see some problems with that code: if you really start each sendSNMPGET command in the background, you will very quickly put a massive load on your computer. For each line you read, another background process is started. This can slow your machine down to the point where it is rendered useless.
I propose not running more than 20 background processes at a time.
As you don't seem to have liked my answer with GNU Parallel, I'll show you a very simplistic way of doing it in parallel without needing to install that...
#!/bin/bash
MAX=8
j=0
while read ip code
do
(sleep 5; echo $ip $code) & # Replace this with your SNMPGET
((j++))
if [ $j -eq $MAX ]; then
echo -n Pausing with $MAX processes...
j=0
wait
fi
done < file
wait
This starts up to 8 processes (you can change it) and then waits for them to complete before starting another 8. Other respondents have already shown you how to feed your mysql output into the loop, in place of the done < file on the second-to-last line of the script...
The key to this is the wait which will wait for all started processes to complete.
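Putting the pieces together, an untested sketch that feeds the mysql output straight into the batching loop (same variable names as in the question):
MAX=20
j=0
while read ip codesite; do
sendSNMPGET "$ip" "$snmp_community" "$codesite" &
((j++))
if [ $j -eq $MAX ]; then j=0; wait; fi
done < <("$mysql" --skip-column-names -h "$db_address" -u "$db_user" -p"$db_passwd" "$db_name" \
-e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
wait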

awk: setting environment variables directly from within an awk script

First post here, but I've been a lurker for ages. I have googled for ages, but can't find what I want (many ambiguous topic subjects which don't ask what the topic suggests...). Not new to awk or scripting, just a little rusty :)
I'm trying to write an awk script which will set shell env values as it runs, for another bash script to pick up and use later on. I cannot simply use stdout from awk to report the value I want set (i.e. "export whatever=awk cmd here"), as that's already directed to a 'results file' which the awk script is creating (plus I have more than one variable to export in the final code anyway).
As an example test script, to demo my issue:
echo $MYSCRIPT_RESULT # returns nothing, not currently set
echo | awk -f scriptfile.awk # do whatever, setting MYSCRIPT_RESULT as we go
echo $MYSCRIPT_RESULT # desired: returns the env value set in scriptfile.awk
Within scriptfile.awk, I have tried (without success):
1) building and executing an ad hoc string directly:
{
cmdline="export MYSCRIPT_RESULT=1"
cmdline
}
2) using the system function:
{
cmdline="export MYSCRIPT_RESULT=1"
system(cmdline)
}
... but these do not work. I suspect that these two commands create a subshell within the shell awk is executing from, and do what I ask (proven by touching files as a test), but once the "cmd"/system calls have completed, the subshell dies, unfortunately taking whatever I have set with it, so my env setting changes don't stick from "the caller of awk"'s perspective.
So my question is: how do you actually set env variables within awk directly, so that a calling process can access them after awk execution has completed? Is it actually possible?
Other than the ad hoc/system ways above, which I have proven fail for me, I cannot see how this could be done (other than writing these values to a 'random' file somewhere to be picked up and read by the calling script, which IMO is a little dirty anyway). Hence, help!
All ideas/suggestions/comments welcome!
You cannot change the environment of your parent process. If
MYSCRIPT_RESULT=$(awk stuff)
is unacceptable, what you are asking cannot be done.
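For what it's worth, a minimal sketch of that command-substitution form (the awk program is just a placeholder for your real script):
MYSCRIPT_RESULT=$(echo | awk '{ print "1" }')
echo "$MYSCRIPT_RESULT" # the value is now set in the calling shell itself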
You can also use something like what is described at
Set variable in current shell from awk
unset var
var=99
declare $( echo "foobar" | awk '/foo/ {tmp="17"} END {print "var="tmp}' )
echo "var=$var"
var=
The awk END clause is essential; otherwise, if there are no matches to the pattern, declare dumps the current environment to stdout and doesn't change the content of your variable.
Multiple values can be set by separating them with spaces.
declare a=1 b=2
echo -e "a=$a\nb=$b"
NOTE: declare is bash-only; for other shells, use eval with the same syntax.
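For example, the same pipeline with eval instead of declare (a sketch that should also work in plain sh):
unset var
eval "$(echo "foobar" | awk '/foo/ {tmp="17"} END {print "var=" tmp}')"
echo "var=$var" # var=17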
You can do this, but it's a bit of a kludge. Since awk does not allow redirection to a file descriptor, you can use a fifo or a regular file:
$ mkfifo fifo
$ echo MYSCRIPT_RESULT=1 | awk '{ print > "fifo" }' &
$ IFS== read var value < fifo
$ eval export $var=$value
It's not really necessary to split the var and value; you could just as easily have awk print the "export" and just eval the output directly.
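For instance, a sketch of that shorter variant, with no fifo needed:
$ eval "$(echo MYSCRIPT_RESULT=1 | awk '{ print "export " $0 }')"
$ echo "$MYSCRIPT_RESULT"
1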
I found a good answer. Encapsulate everything in a subshell!
The command declare works as below:
#Creates 3 variables
declare var1=1 var2=2 var3=3
ex1:
#Exactly the same as above
$(awk 'BEGIN{var="declare "}{var=var"var1=1 var2=2 var3=3"}END{print var}')
I found some really interesting uses for this technique. In the next example I have several partitions with labels. I create variables using the labels as variable names and the device names as variable values.
ex2:
#Partition data
lsblk -o NAME,LABEL
NAME LABEL
sda
├─sda1
├─sda2
├─sda5 System
├─sda6 Data
└─sda7 Arch
#Creates a subshell to execute the text
$(\
#Pipe lsblk to awk
lsblk -o NAME,LABEL | awk \
#Initiate the variable with the text for the declare command
'BEGIN{txt="declare "}'\
#Filters devices with labels Arch or Data
'/Data|Arch/'\
#Concatenate txt with itself plus text for the variables(name and value)
#substr eliminates the special characters before the device name
'{txt=txt$2"="substr($1,3)" "}'\
#AWK prints the text and the subshell execute as a command
'END{print txt}'\
)
The end result of this is 2 variables: Data with value sda6 and Arch with value sda7.
The same example in a single line:
$(lsblk -o NAME,LABEL | awk 'BEGIN{txt="declare "}/Data|Arch/{txt=txt$2"="substr($1,3)" "}END{print txt}')

How do I iterate over all the lines output by a command in zsh?

How do I iterate over all the lines output by a command using zsh, without setting IFS?
The reason is that I want to run a command against every file output by a command, and some of these files contain spaces.
Eg, given the deleted file:
foo/bar baz/gamma
That is, a single directory 'foo', containing a sub directory 'bar baz', containing a file 'gamma'.
Then running:
git ls-files --deleted | xargs ls
Will result in that file being handled as two files: 'foo/bar' and 'baz/gamma'.
I need it to handle it as one file: 'foo/bar baz/gamma'.
If you want to run the command once for all the lines:
ls "${(#f)$(git ls-files --deleted)}"
The f parameter expansion flag means to split the command's output on newlines. There's a more general form (@s:||:) to split at an arbitrary string like ||. The @ flag means to retain empty records. Somewhat confusingly, the whole expansion needs to be inside double quotes, to avoid IFS splitting on the output of the command substitution, but it will produce separate words for each record.
If you want to run the command for each line in turn, the portable idiom isn't particularly complicated:
git ls-files --deleted | while IFS= read -r line; do ls "$line"; done
If you want to run the command as few times as the command line length limit permits, use zargs.
autoload -U zargs
zargs -- "${(#f)$(git ls-files --deleted)}" -- ls
Using tr and the -0 option of xargs, assuming that the lines don't contain \000 (NUL), which is a fair assumption due to NUL being one of the characters that can't appear in filenames:
git ls-files --deleted | tr '\n' '\000' | xargs -0 ls
This turns the line foo/bar baz/gamma\n into foo/bar baz/gamma\000, which xargs -0 knows how to handle.

Is it possible to create a multi-line string variable in a Makefile

I want to create a makefile variable that is a multi-line string (e.g. the body of an email release announcement). Something like:
ANNOUNCE_BODY="
Version $(VERSION) of $(PACKAGE_NAME) has been released
It can be downloaded from $(DOWNLOAD_URL)
etc, etc"
But I can't seem to find a way to do this. Is it possible?
Yes, you can use the define keyword to declare a multi-line variable, like this:
define ANNOUNCE_BODY
Version $(VERSION) of $(PACKAGE_NAME) has been released.
It can be downloaded from $(DOWNLOAD_URL).
etc, etc.
endef
The tricky part is getting your multi-line variable back out of the makefile. If you just do the obvious thing of using "echo $(ANNOUNCE_BODY)", you'll see the result that others have posted here -- the shell tries to handle the second and subsequent lines of the variable as commands themselves.
However, you can export the variable value as-is to the shell as an environment variable, and then reference it from the shell as an environment variable (NOT a make variable). For example:
export ANNOUNCE_BODY
all:
#echo "$$ANNOUNCE_BODY"
Note the use of $$ANNOUNCE_BODY, indicating a shell environment variable reference, rather than $(ANNOUNCE_BODY), which would be a regular make variable reference. Also be sure to use quotes around your variable reference, to make sure that the newlines aren't interpreted by the shell itself.
Of course, this particular trick may be platform and shell sensitive. I tested it on Ubuntu Linux with GNU bash 3.2.13; YMMV.
Another approach to 'getting your multi-line variable back out of the makefile' (noted by Eric Melski as 'the tricky part') is to use the subst function to replace the newlines introduced with define in your multi-line string with \n, then use -e with echo to interpret them. You may need to set SHELL := /bin/bash in the makefile to get an echo that does this.
An advantage of this approach is that you can also put other such escape characters into your text and have them respected.
This sort of synthesizes all the approaches mentioned so far...
You wind up with (note the two blank lines inside the first define; they are what give newline the value of a single newline character):
define newline


endef
define ANNOUNCE_BODY
As of $(shell date), version $(VERSION) of $(PACKAGE_NAME) has been released.
It can be downloaded from $(DOWNLOAD_URL).
endef
someTarget:
echo -e '$(subst $(newline),\n,${ANNOUNCE_BODY})'
Note the single quotes on the final echo are crucial.
Assuming you only want to print the content of your variable on standard output, there is another solution:
do-echo:
$(info $(YOUR_MULTILINE_VAR))
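A minimal self-contained sketch of that approach (the variable name is just an example; remember the recipe line must start with a tab):
define YOUR_MULTILINE_VAR
first line
second line
endef

do-echo:
$(info $(YOUR_MULTILINE_VAR))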
Yes. You escape the newlines with \:
VARIABLE="\
THIS IS A VERY LONG\
TEXT STRING IN A MAKE VARIABLE"
update
Ah, you want the newlines? Then no, I don't think there's any way in vanilla Make. However, you can always use a here-document in the command part
[This does not work, see comment from MadScientist]
foo:
echo <<EOF
Here is a multiple line text
with embedded newlines.
EOF
Not completely related to the OP, but hopefully this will help someone in future.
(as this question is the one that comes up most in google searches).
In my Makefile, I wanted to pass the contents of a file to a docker build command.
After much consternation, I decided to:
base64 encode the contents in the Makefile (so that I could have a single line and pass them as a docker build arg...)
base64 decode the contents in the Dockerfile (and write them to a file)
see example below.
NB: In my particular case, I wanted to pass an ssh key during the image build, using the example from https://vsupalov.com/build-docker-image-clone-private-repo-ssh-key/ (using a multi-stage docker build to clone a git repo, then dropping the ssh key from the final image in the 2nd stage of the build).
Makefile
...
MY_VAR_ENCODED=$(shell cat /path/to/my/file | base64)
my-build:
@docker build \
--build-arg MY_VAR_ENCODED="$(MY_VAR_ENCODED)" \
--no-cache \
-t my-docker:build .
...
Dockerfile
...
ARG MY_VAR_ENCODED
RUN mkdir /root/.ssh/ && \
echo "${MY_VAR_ENCODED}" | base64 -d > /path/to/my/file/in/container
...
Why don't you make use of the \n character in your string to define the end-of-line? Also add an extra backslash to continue it over multiple lines:
ANNOUNCE_BODY=" \n\
Version $(VERSION) of $(PACKAGE_NAME) has been released \n\
\n\
It can be downloaded from $(DOWNLOAD_URL) \n\
\n\
etc, etc"
Just a postscript to Eric Melski's answer: You can include the output of commands in the text, but you must use the Makefile syntax "$(shell foo)" rather than the shell syntax "$(foo)". For example:
define ANNOUNCE_BODY
As of $(shell date), version $(VERSION) of $(PACKAGE_NAME) has been released.
It can be downloaded from $(DOWNLOAD_URL).
endef
You should use "define/endef" Make construct:
define ANNOUNCE_BODY
Version $(VERSION) of $(PACKAGE_NAME) has been released.
It can be downloaded from $(DOWNLOAD_URL).
etc, etc.
endef
Then you should pass the value of this variable to the shell command. But if you do this using Make variable substitution, it will cause the command to be split into multiple commands:
ANNOUNCE.txt:
echo $(ANNOUNCE_BODY) > $@ # doesn't work
Quoting won't help either.
The best way to pass value is to pass it via environment variable:
ANNOUNCE.txt: export ANNOUNCE_BODY:=$(ANNOUNCE_BODY)
ANNOUNCE.txt:
echo "$${ANNOUNCE_BODY}" > $#
Notice:
The variable is exported for this particular target only, so that the environment will not get polluted much;
Use the environment variable (double quotes and curly brackets around the variable name);
Use quotes around the variable. Without them, newlines will be lost and all the text will appear on one line.
This doesn't give a here document, but it does display a multi-line message in a way that's suitable for pipes.
=====
MSG = this is a\\n\
multi-line\\n\
message
method1:
#$(SHELL) -c "echo '$(MSG)'" | sed -e 's/^ //'
=====
You can also use Gnu's callable macros:
=====
MSG = this is a\\n\
multi-line\\n\
message
method1:
#echo "Method 1:"
#$(SHELL) -c "echo '$(MSG)'" | sed -e 's/^ //'
#echo "---"
SHOW = $(SHELL) -c "echo '$1'" | sed -e 's/^ //'
method2:
#echo "Method 2:"
#$(call SHOW,$(MSG))
#echo "---"
=====
Here's the output:
=====
$ make method1 method2
Method 1:
this is a
multi-line
message
---
Method 2:
this is a
multi-line
message
---
$
=====
With GNU Make 3.82 and above, the .ONESHELL option is your friend when it comes to multiline shell snippets. Putting together hints from other answers, I get:
VERSION = 1.2.3
PACKAGE_NAME = foo-bar
DOWNLOAD_URL = $(PACKAGE_NAME).somewhere.net
define nwln


endef
define ANNOUNCE_BODY
Version $(VERSION) of $(PACKAGE_NAME) has been released.
It can be downloaded from $(DOWNLOAD_URL).
etc, etc.
endef
.ONESHELL:
# mind the *leading* <tab> character
version:
#printf "$(subst $(nwln),\n,$(ANNOUNCE_BODY))"
Make sure, when copying and pasting the above example into your editor, that any <tab> characters are preserved, else the version target will break! Also keep the two blank lines between define nwln and endef; they are what give nwln the value of a single newline.
Note that .ONESHELL will cause all targets in the Makefile to use a single shell for all their commands.
GNU `make' manual, 6.8: Defining Multi-Line Variables
In the spirit of .ONESHELL, it's possible to get pretty close in .ONESHELL-challenged environments (again, the two blank lines inside the define are deliberate):
define _oneshell_newline_


endef
define oneshell
#eval "$$(printf '%s\n' '$(strip \
$(subst $(_oneshell_newline_),\n, \
$(subst \,\/, \
$(subst /,//, \
$(subst ','"'"',$(1))))))' | \
sed -e 's,\\n,\n,g' -e 's,\\/,\\,g' -e 's,//,/,g')"
endef
An example of use would be something like this:
define TEST
printf '>\n%s\n' "Hello
World\n/$$$$/"
endef
all:
$(call oneshell,$(TEST))
That shows the output (assuming pid 27801):
>
Hello
World\n/27801/
This approach does allow for some extra functionality:
define oneshell
#eval "set -eux ; $$(printf '%s\n' '$(strip \
$(subst $(_oneshell_newline_),\n, \
$(subst \,\/, \
$(subst /,//, \
$(subst ','"'"',$(1))))))' | \
sed -e 's,\\n,\n,g' -e 's,\\/,\\,g' -e 's,//,/,g')"
endef
These shell options will:
Print each command as it is executed
Exit on the first failed command
Treat use of undefined shell variables as an error
Other interesting possibilities will likely suggest themselves.
I like alhadis's answer best. But to keep columnar formatting, add one more thing.
SYNOPSIS := :: Synopsis: Makefile\
| ::\
| :: Usage:\
| :: make .......... : generates this message\
| :: make synopsis . : generates this message\
| :: make clean .... : eliminate unwanted intermediates and targets\
| :: make all ...... : compile entire system from ground-up
Outputs:
:: Synopsis: Makefile
::
:: Usage:
:: make .......... : generates this message
:: make synopsis . : generates this message
:: make clean .... : eliminate unwanted intermediates and targets
:: make all ...... : compile entire system from ground-up
Not really a helpful answer, but just to indicate that 'define' does not work as answered by Ax (did not fit in a comment):
VERSION=4.3.1
PACKAGE_NAME=foobar
DOWNLOAD_URL=www.foobar.com
define ANNOUNCE_BODY
Version $(VERSION) of $(PACKAGE_NAME) has been released
It can be downloaded from $(DOWNLOAD_URL)
etc, etc
endef
all:
@echo $(ANNOUNCE_BODY)
It gives an error that the command 'It' cannot be found, so it tries to interpret the second line of ANNOUNCE_BODY as a command.
It worked for me:
ANNOUNCE_BODY="first line\\nsecond line"
all:
@echo -e $(ANNOUNCE_BODY)
GNU Makefile can do things like the following. It is ugly, and I won't say you should do it, but I do in certain situations.
PROFILE = \
\#!/bin/sh.exe\n\
\#\n\
\# A MinGW equivalent for .bash_profile on Linux. In MinGW/MSYS, the file\n\
\# is actually named .profile, not .bash_profile.\n\
\#\n\
\# Get the aliases and functions\n\
\#\n\
if [ -f \$${HOME}/.bashrc ]\n\
then\n\
. \$${HOME}/.bashrc\n\
fi\n\
\n\
export CVS_RSH="ssh"\n
#
.profile:
echo -e "$(PROFILE)" | sed -e 's/^[ ]//' >.profile
make .profile creates a .profile file if one does not exist.
This solution was used where the application will only use GNU Makefile in a POSIX shell environment. The project is not an open source project where platform compatibility is an issue.
The goal was to create a Makefile that facilitates both setup and use of a particular kind of workspace. The Makefile brings along with it various simple resources without requiring things like another special archive, etc. It is, in a sense, a shell archive. A procedure can then say things like: drop this Makefile in the folder you want to work in; to set up your workspace, enter make workspace; then to do blah, enter make blah; etc.
What can get tricky is figuring out what to shell quote. The above does the job and is close to the idea of specifying a here document in the Makefile. Whether it is a good idea for general use is a whole other issue.
I believe the safest answer for cross-platform use would be to use one echo per line:
ANNOUNCE.txt:
rm -f $@
echo "Version $(VERSION) of $(PACKAGE_NAME) has been released" > $@
echo "" >> $@
echo "It can be downloaded from $(DOWNLOAD_URL)" >> $@
echo >> $@
echo "etc, etc" >> $@
This avoids making any assumptions about the version of echo available.
Use string substitution:
VERSION := 1.1.1
PACKAGE_NAME := Foo Bar
DOWNLOAD_URL := https://go.get/some/thing.tar.gz
ANNOUNCE_BODY := Version $(VERSION) of $(PACKAGE_NAME) has been released. \
| \
| It can be downloaded from $(DOWNLOAD_URL) \
| \
| etc, etc
Then in your recipe, put
@echo $(subst | ,$$'\n',$(ANNOUNCE_BODY))
This works because Make substitutes every occurrence of "| " (note the space) with a newline character ($$'\n'). You can think of the equivalent shell-script invocations as being something like this:
Before:
$ echo "Version 1.1.1 of Foo Bar has been released. | | It can be downloaded from https://go.get/some/thing.tar.gz | | etc, etc"
After:
$ echo "Version 1.1.1 of Foo Bar has been released.
>
> It can be downloaded from https://go.get/some/thing.tar.gz
>
> etc, etc"
I'm not sure if $'\n' is available on non-POSIX systems, but if you can gain access to a single newline character (even by reading a string from an external file), the underlying principle is the same.
If you have many messages like this, you can reduce noise by using a macro:
print = $(subst | ,$$'\n',$(1))
Where you'd invoke it like this:
@$(call print,$(ANNOUNCE_BODY))
Hope this helps somebody. =)
As an alternative you can use the printf command. This is helpful on OSX or other platforms with fewer features.
To simply output a multiline message:
all:
@printf '%s\n' \
'Version $(VERSION) has been released' \
'' \
'You can download from URL $(URL)'
If you are trying to pass the string as an arg to another program, you can do so like this:
all:
/some/command "`printf '%s\n' 'Version $(VERSION) has been released' '' 'You can download from URL $(URL)'`"