zsh declare PROMPT using multiple lines - variables

I would like to declare my ZSH prompt using multiple lines and comments, something like:
PROMPT="
%n # username
@
%m # hostname
\ # space
%~ # directory
$
\ # space
"
(e.g. something like the "ignore whitespace mode" of Perl regexes, i.e. the /x modifier)
I could swear I used to do something like this, but cannot find those old files any longer. I have searched for variations of "zsh declare prompt across multiple lines" but haven't quite found it.
I know that I can use \ for line continuation, but then we end up with newlines and whitespaces.
edit: Maybe I am misremembering about comments - here is an example without comments.

Not exactly what you are looking for, but you don't need to define PROMPT in a single assignment:
PROMPT="%n" # username
PROMPT+="#%m" # #hostname
PROMPT+=" %~" # directory
PROMPT+="$ "
Probably closer to what you wanted is the ability to join the elements of an array:
prompt_components=(
    %n     # username
    " "    # space
    %m     # hostname
    " "    # space
    "%~"   # directory
    "$"
)
PROMPT=${(j::)prompt_components}
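As a quick check (a sketch; print -r -- just prints its argument verbatim, without prompt expansion), you can inspect the joined value before assigning it:
print -r -- ${(j::)prompt_components}    # -> %n %m %~$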
Or, you could let the j flag add the space delimiters, rather than putting them in the array:
# This is slightly different from the above, as it will put a space
# between the directory and the $ (which IMO would look better).
# I leave it as an exercise to figure out how to prevent that.
prompt_components=(
    "%n@%m"   # username@hostname
    "%~"      # directory
    "$"
)
PROMPT=${(j: :)prompt_components}
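Again a quick check (a sketch, assuming the array above; print -P would additionally expand the prompt escapes):
print -r -- ${(j: :)prompt_components}   # -> %n@%m %~ $   (note the space before the $)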

Related

Values of one file to change corresponding value in another file?

I need help with a simple dash script solution: a script reading the values of the file "Install.txt" (sample content):
TRUE 203
TRUE 301
TRUE 602
TRUE 603
The numbers correspond to the number at the end of a line in the file "ExtraExt.sys" (sample content):
# Read $[EXTRA_DIR]/diaryThumbPlace #202
# Read $[CORE_DIR]/myBorderStyle #203
# Read $[EXTRA_DIR]/mMenu #301
# Read $[EXTRA_DIR]/dDecor #501
# Read $[EXTRA_DIR]/controlPg #601
# Read $[EXTRA_DIR]/DashToDock #602
# Read $[EXTRA_DIR]/deskSwitch #603
All lines are tagged (#). The script should untag the corresponding line that has the same number as in the file "Install.txt". For example, TRUE 203 will untag the line ending with #203
# Read $[CORE_DIR]/myBorderStyle #203
to (without "#" before Read)
Read $[CORE_DIR]/myBorderStyle #203
I've searched for an awk/sed solution, but this seems to require a loop to go through the numbers in Install.txt.
Any help is appreciated. Thank you.
You can try:
# store "203|301|602|603" in search variable
search=$(awk 'BEGIN{OFS=ORS=""}{if(NR>1){print "|"}print $2;}' Install.txt)
sed -r "s/^# (.*#($search))$/\1/g" ExtraExt.sys
You get:
# Read $[EXTRA_DIR]/diaryThumbPlace #202
Read $[CORE_DIR]/myBorderStyle #203
Read $[EXTRA_DIR]/mMenu #301
# Read $[EXTRA_DIR]/dDecor #501
# Read $[EXTRA_DIR]/controlPg #601
Read $[EXTRA_DIR]/DashToDock #602
Read $[EXTRA_DIR]/deskSwitch #603
Or, use the -i option to edit the file in place:
sed -i -r "s/^# (.*#($search))$/\1/g" ExtraExt.sys
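As a side note, the search variable can also be built without awk; a minimal sketch, assuming POSIX cut and paste:
# join the second column of Install.txt with "|", e.g. "203|301|602|603"
search=$(cut -d' ' -f2 Install.txt | paste -sd'|' -)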
One option using awk could be reading the first file Install.txt and storing the values of the second field in arr.
While reading the second file ExtraExt.sys, you can get the last column using $NF and match it against the pattern ^#[0-9]+$.
If there is a match, remove the # from the match, leaving only the number, and check if the number is in arr.
If it is, print the current line without the leading #.
awk '
FNR==NR {
    arr[$2]
    next
}
match($NF, /^#[0-9]+$/) {
    if (substr($NF, RSTART+1, RLENGTH) in arr) {
        sub(/^#[[:space:]]+/, "")
        print
    }
}
' Install.txt ExtraExt.sys
Output
Read $[CORE_DIR]/myBorderStyle #203
Read $[EXTRA_DIR]/mMenu #301
Read $[EXTRA_DIR]/DashToDock #602
Read $[EXTRA_DIR]/deskSwitch #603
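Note that this prints only the untagged lines. If you want the whole file back with just those lines untagged (as the sed answer above produces), a sketch of the small change: do the substitution where the line qualifies, and print every line unconditionally:
awk '
FNR==NR {
    arr[$2]
    next
}
match($NF, /^#[0-9]+$/) && (substr($NF, RSTART+1, RLENGTH) in arr) {
    # untag the line; the final 1 below prints all lines, modified or not
    sub(/^#[[:space:]]+/, "")
}
1
' Install.txt ExtraExt.sys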

Append text line after multiline match and stop on first multiline match

I would need a sed or awk command, not script, that:
1) matches 2 sequential lines in a file:
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
This is required because either single line can occur in the file more than once,
but two such sequential lines are unique enough to match on.
2) inserts/appends this text line after the matched lines:
filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]
3) stops processing after the first match and append
So, the text file looks like this:
...
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
#
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
# Configuration option devices/global_filter.
# Limit the block devices that are used by LVM system components.
# Because devices/filter may be overridden from the command line, it is
# not suitable for system-wide device filtering, e.g. udev and lvmetad.
# Use global_filter to hide devices from these LVM system components.
# The syntax is the same as devices/filter. Devices rejected by
# global_filter are not opened by LVM.
# This configuration option has an automatic default value.
# global_filter = [ "a|.*/|" ]
# Configuration option devices/types.
# List of additional acceptable block device types.
# These are of device type names from /proc/devices, followed by the
...
I would need to have output like this:
...
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
#
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]
# Configuration option devices/global_filter.
# Limit the block devices that are used by LVM system components.
# Because devices/filter may be overridden from the command line, it is
# not suitable for system-wide device filtering, e.g. udev and lvmetad.
# Use global_filter to hide devices from these LVM system components.
# The syntax is the same as devices/filter. Devices rejected by
# global_filter are not opened by LVM.
# This configuration option has an automatic default value.
# global_filter = [ "a|.*/|" ]
# Configuration option devices/types.
# List of additional acceptable block device types.
# These are of device type names from /proc/devices, followed by the
...
None of the multiline sed examples I found on Stack Overflow works for me.
I tried F. Hauri's example from this topic: Append a string after a multiple line match in bash
sed -e $'/^admin:/,/^$/{/users:/a\ NewUser\n}'
It works fine when matching unique words, but did not work for matching sequential text lines like this:
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
and adding '0,' to the sed expression to stop on the first match did not work in that case either.
I have updated the description to better describe the goal.
awk '
# NOTE: \s in a regex is a GNU awk extension
/^\s*# This configuration option has an automatic default value\./{
    found=1
}
found && !flag && /^\s*# filter = \[ "a\|\.\*\/\|" \]/{
    flag=1
    # append the new filter line directly below the matched pair
    $0=$0 ORS "filter = [\"a|sd.*|\", \"a|drbd.*|\", \"r|.*|\"]"
}
1
' test.conf > test.tmp && cp test.conf test.conf.bak && mv -f test.tmp test.conf
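If you would rather stay with sed, here is a GNU sed sketch of the same idea (assumptions: GNU sed, since \s is a GNU extension; the hold space is used as a "done" flag so only the first matching pair gets the insertion):
sed '
/^\s*# This configuration option has an automatic default value\.$/ {
    N
    /\n\s*# filter = \[ "a|\.\*\/|" \]$/ {
        x
        /done/ {
            x
            b
        }
        s/^/done/
        x
        a\
filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]
    }
}
' test.conf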

Perl 6 $*ARGFILES.handles in binary mode?

I'm trying out $*ARGFILES.handles and it seems that it opens the files in binary mode.
I'm writing a zip-merge program that prints one line from each file until there are no more lines to read.
#! /usr/bin/env perl6
my @handles = $*ARGFILES.handles;
# say $_.encoding for @handles;
while @handles
{
    my $handle = @handles.shift;
    say $handle.get;
    @handles.push($handle) unless $handle.eof;
}
I invoke it like this: zip-merge person-say3 repeat repeat2
It fails with: Cannot do 'get' on a handle in binary mode in block at ./zip-merge line 7
The specified files are text files (encoded in utf8), and I get the error message for non-executable files as well as executable ones (with perl6 code).
The commented-out line says utf8 for every file I give it, so they should not be binary.
perl6 -v: This is Rakudo version 2018.10 built on MoarVM version 2018.10
Have I done something wrong, or have I uncovered an error?
The IO::Handle objects that .handles returns are closed.
my @*ARGS = 'test.p6';
my @handles = $*ARGFILES.handles;
for @handles { say $_ }
# IO::Handle<"test.p6".IO>(closed)
If you just want to get your code to work, add the following line after assigning to @handles.
.open for @handles;
The reason for this is that the iterator for .handles is written in terms of IO::CatHandle.next-handle, which opens the current handle and closes the previous handle.
The problem is that all of them get a chance to be both the current handle, and the previous handle before you get a chance to do any work on them.
(Perhaps .next-handle and/or .handles needs a :!close parameter.)
Assuming you want it to work like roundrobin, I would actually write it more like this:
# /usr/bin/env perl6
use v6.d;
my @handles = $*ARGFILES.handles;
# a sequence of line sequences
my $line-seqs = @handles.map(*.open.lines);
# Seq.new(
#     Seq.new( '# /usr/bin/env perl6', 'use v6.d' ),   # first file
#     Seq.new( 'foo', 'bar', 'baz' ),                  # second file
# )
for flat roundrobin $line-seqs {
    .say
}
# `roundrobin` without `flat` would give the following result
# ('# /usr/bin/env perl6', 'foo'),
# ('use v6.d', 'bar'),
# ('baz')
If you used an array for $line-seqs, you will need to de-itemize (.<>) the values before passing them to roundrobin.
for flat roundrobin @line-seqs.map(*.<>) {
    .say
}
Actually, I personally would be more likely to write something similar to this (long) one-liner:
$*ARGFILES.handles.eager».open».lines.&roundrobin.flat.map: *.put
:bin is always set on this type of object. Since you are working on the handles, you should either read line by line as shown in the example, or reset the handle so that it's not in binary mode.

grep start and end time from log file - AIX

Log file looks something like below
Process Beginning - 2016-04-02-00.36.13
Putting Files To daADadD for
File will move to /sadafJJHFASJFFASJ/
Extract Files :-/ASFDSHAF_ABC_2016-04-02.csv
/ASFDSHAF_ABC.2016-04-02.csv /
ASFDSHAF_ABC.2016-04-02.csv /ASFDSHAF_ABC.2016-04-02.csv /
Process Ending - 2016-04-02-00.36.36
Process Beginning - 2016-04-02-10.01.20
Putting Files To daADadD for
File will move to /sadafJJHFASJFFASJ/
Extract Files :-/sdshsdhsh_cvb.2016-04-02.csv
/sdshsdhsh_cvb.2016-04-02.csv /sdshsdhsh_cvb.2016-04-02.csv
Process Ending - 2016-04-02-10.01.21
There are multiple entries of the patterns /Process Beginning - 2016-04-02/ and /Process Ending - 2016-04-02/.
How do I find the entry or block which has the pattern /ABC_2016-04-02.csv/ in between?
You can do it with sed (at least with GNU sed 4, sorry no AIX sed to test with):
# read complete block into hold space
/Process Beginning/,/Process Ending/ {
    H
}
# "test" for the csv file in the block
/Process Ending/ {
    # get hold space (i.e. complete block)
    x
    # if s did a "substitution" (i.e. the csv file occurs in the block): print
    s/ASFDSHAF_ABC.2016-04-02.csv/&/p
    # clear hold space for next block
    s/.*//
    h
}
Say your log file is grplog.log and the script is grplog.sed then run the script like this: sed -n -f grplog.sed grplog.log.
Explanation:
The script reads all lines in your processing block into the hold space in the first part.
The second part, triggered on the Process Ending line, uses the p flag of the s command: if a substitution was made, print the new pattern space. Due to the use of &, the old and new pattern spaces are the same.
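For comparison, the same idea in awk (plain POSIX awk; it collects each Beginning..Ending block and prints the block only if it mentions the csv file):
awk '
/Process Beginning/ { block = ""; inblock = 1 }
inblock             { block = block $0 ORS }
/Process Ending/    { if (block ~ /ASFDSHAF_ABC.2016-04-02.csv/) printf "%s", block; inblock = 0 }
' grplog.log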

Create postfix aliases file from LDIF using awk

I want to create a Postfix aliases file from the LDIF output of ldapsearch.
The LDIF file contains records for approximately 10,000 users. Each user has at least one entry for the proxyAddresses attribute. I need to create an alias corresponding to each proxyAddress that meets the conditions below. The created aliases must point to sAMAccountName@other.domain.
Type is SMTP or smtp (case-insensitive)
Domain is exactly contoso.com
I'm not sure if the attribute ordering in the LDIF file is consistent. I don't think I can assume that sAMAccountName will always appear last.
Example input file
dn: CN=John Smith,OU=Users,DC=contoso,DC=com
proxyAddresses: SMTP:smith@contoso.com
proxyAddresses: smtp:John.Smith@contoso.com
proxyAddresses: smtp:jsmith@elsewhere.com
proxyAddresses: MS:ORG/ORGEXCH/JOHNSMITH
sAMAccountName: smith

dn: CN=Tom Frank,OU=Users,DC=contoso,DC=com
sAMAccountName: frank
proxyAddresses: SMTP:frank@contoso.com
proxyAddresses: smtp:Tom.Frank@contoso.com
proxyAddresses: smtp:frank@elsewhere.com
proxyAddresses: MS:ORG/ORGEXCH/TOMFRANK
Example output file
smith: smith@other.domain
John.Smith: smith@other.domain
frank: frank@other.domain
Tom.Frank: frank@other.domain
Ideal solution
I'd like to see a solution using awk, but other methods are acceptable too. Here are the qualities that are most important to me, in order:
Simple and readable. Self-documenting is better than one-liners.
Efficient. This will be used thousands of times.
Idiomatic. Doing it "the awk way" would be nice if it doesn't compromise the first two goals.
What I've tried
I've managed to make a start on this, but I'm struggling to understand the finer points of awk.
I tried using csplit to create separate files for each record in the LDIF output, but that seems wasteful since I only want a single file in the end.
I tried setting RS="" in awk to get complete records instead of individual lines, but then I wasn't sure where to go from there.
I tried using awk to split the big LDIF file into separate files for each record and then processing those with another shell script, but that seemed wasteful.
Here is a gawk script which you can run like this: gawk -f ldif.awk yourfile.ldif
Please note: the multicharacter value of RS is a gawk extension.
$ cat ldif.awk
BEGIN {
    RS = "\n\n"   # Record separator: empty line
    FS = "\n"     # Field separator: newline
}
# For each record: loop twice through fields
{
    # Loop #1 identifies the sAMAccountName
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^sAMAccountName: /) {
            sAN = substr($i, 17)
            break
        }
    }
    # Loop #2 prints output lines
    for (i = 1; i <= NF; i++) {
        if (tolower($i) ~ /smtp:.*@contoso\.com$/) {
            split($i, n, ":|@")
            print n[3] ": " sAN "@other.domain"
        }
    }
}
Here is a way to do it using standard awk.
# Display the postfix alias(es) for the previous user (if any)
function dump(   i) {   # declaring i in the arg list makes it local to the function
    for (i in id) printf("%s: %s@other.domain\n", id[i], an)
    delete id
}
# store all email names for that user in the id array
/^proxyAddresses:.[Ss][Mm][Tt][Pp]:.*@contoso\.com/ { gsub(/^.*:/, ""); gsub(/@.*$/, ""); id[i++] = $0 }
# store the account name
/^sAMAccountName:/ { an = $2 }
# When a new record is found, process the previous one
/^dn:/ { dump() }
# Process the last record
END { dump() }
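A usage sketch for the script above (the file names are hypothetical; postalias is the standard Postfix tool for building an alias database):
awk -f make-aliases.awk users.ldif > /etc/postfix/ldap_aliases
postalias /etc/postfix/ldap_aliases   # assumes alias_maps references this file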