I need a sed or awk command (a one-liner, not a script) that:
1) matches two sequential lines in a file:
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
This is required because either line on its own can occur in the file more than once.
But these two lines in sequence are unique enough to match on.
2) inserts/appends this text line after the matched lines:
filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]
3) stops processing after the first match and append.
So, the text file looks like this:
...
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
#
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
# Configuration option devices/global_filter.
# Limit the block devices that are used by LVM system components.
# Because devices/filter may be overridden from the command line, it is
# not suitable for system-wide device filtering, e.g. udev and lvmetad.
# Use global_filter to hide devices from these LVM system components.
# The syntax is the same as devices/filter. Devices rejected by
# global_filter are not opened by LVM.
# This configuration option has an automatic default value.
# global_filter = [ "a|.*/|" ]
# Configuration option devices/types.
# List of additional acceptable block device types.
# These are of device type names from /proc/devices, followed by the
...
I need the output to look like this:
...
# filter = [ "a|^/dev/hda8$|", "r|.*/|" ]
#
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]
# Configuration option devices/global_filter.
# Limit the block devices that are used by LVM system components.
# Because devices/filter may be overridden from the command line, it is
# not suitable for system-wide device filtering, e.g. udev and lvmetad.
# Use global_filter to hide devices from these LVM system components.
# The syntax is the same as devices/filter. Devices rejected by
# global_filter are not opened by LVM.
# This configuration option has an automatic default value.
# global_filter = [ "a|.*/|" ]
# Configuration option devices/types.
# List of additional acceptable block device types.
# These are of device type names from /proc/devices, followed by the
...
None of the multiline sed examples I found on Stack Overflow works for me.
I tried F. Hauri's example from this topic: Append a string after a multiple line match in bash
sed -e $'/^admin:/,/^$/{/users:/a\ NewUser\n}'
It works fine when matching unique words, but it did not work for matching sequential text lines like these:
# This configuration option has an automatic default value.
# filter = [ "a|.*/|" ]
Also, adding '0,' to the sed expression to stop at the first match did not work in that case.
I've updated the description to better describe the goal.
awk '
# remember the line number of the first marker line
/^[[:space:]]*# This configuration option has an automatic default value\./{
  found=NR
}
# only act when the second marker immediately follows the first one,
# and only for the first such pair (flag)
found==NR-1 && !flag && /^[[:space:]]*# filter = \[ "a\|\.\*\/\|" \]/{
  flag=1
  $0=$0 ORS "filter = [\"a|sd.*|\", \"a|drbd.*|\", \"r|.*|\"]"
}
1
' test.conf > test.tmp && cp test.conf test.conf.bak && mv -f test.tmp test.conf
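For comparison, here is a rough GNU sed sketch of the same idea. It assumes the comment lines are not indented (as in the sample above; allow for leading whitespace in the patterns if they are) and relies on the two-line pair being unique in the file, so no explicit stop-after-first-match logic is added:
sed '/^# This configuration option has an automatic default value\.$/{
  N
  /\n# filter = \[ "a|\.\*\/|" \]$/a\
filter = ["a|sd.*|", "a|drbd.*|", "r|.*|"]
}' test.conf > test.tmp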
I would like to declare my ZSH prompt using multiple lines and comments, something like:
PROMPT="
%n # username
#
%m # hostname
\ # space
%~ # directory
$
\ # space
"
(e.g. something like Perl regex's "ignore whitespace" mode)
I could swear I used to do something like this, but cannot find those old files any longer. I have searched for variations of "zsh declare prompt across multiple lines" but haven't quite found it.
I know that I can use \ for line continuation, but then we end up with newlines and whitespace.
edit: Maybe I am misremembering about comments - here is an example without comments.
Not exactly what you are looking for, but you don't need to define PROMPT in a single assignment:
PROMPT="%n" # username
PROMPT+="#%m" # #hostname
PROMPT+=" %~" # directory
PROMPT+="$ "
Probably closer to what you wanted is the ability to join the elements of an array:
prompt_components=(
%n # username
" " # space
%m # hostname
" " # space
"%~" # directory
"$"
)
PROMPT=${(j::)prompt_components}
Or, you could let the j flag add the space delimiters, rather than putting them in the array:
# This is slightly different from the above, as it will put a space
# between the directory and the $ (which IMO would look better).
# I leave it as an exercise to figure out how to prevent that.
prompt_components=(
"%n#%m" # username#hostname
"$~" # directory
"$"
)
PROMPT=${(j: :)prompt_components}
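As a quick sanity check, you can print the joined string before assigning it to PROMPT (shown here for the second array):
print -r -- ${(j: :)prompt_components}
# -> %n#%m %~ $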
Using Terraform v0.12.9 and building a file with the template_file data source, I could not use double dollar signs $$ to have the input ${data_directory} treated as a literal.
I'm looking for a way to sort this out properly, or for any other suggestion or workaround that can help to create a file with this content.
I have tried using a double dollar sign (as in the code example below) to keep ${data_directory} as a literal in the file output.
Here is the code that I'm trying to use to create the Postfix main.cf file with Terraform:
variable "hostname" {
default = "test"
}
variable "domain_name" {
default = "test.com"
}
variable "fn_main_cf" {
default = "main.cf"
}
data "template_file" "main_cf" {
template = <<EOF
##
## Network settings
##
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
inet_interfaces = 127.0.0.1, ::1, 120.121.123.124, 2a03:b0a0:3:d0::5e79:4001
myhostname = ${var.hostname}.${var.domain_name}
###
### Outbound SMTP connections (Postfix as sender)
###
smtp_tls_session_cache_database = btree:$${data_directory}/smtp_scache
EOF
}
data "template_cloudinit_config" "main_cf" {
gzip = false
base64_encode = false
part {
filename = "${var.fn_main_cf}"
content_type = "text/cloud-config"
content = "${data.template_file.main_cf.rendered}"
}
}
resource "null_resource" "main_cf" {
triggers = {
template = "${data.template_file.main_cf.rendered}"
}
provisioner "local-exec" {
command = "echo \"${data.template_file.main_cf.rendered}\" > ~/projects/mail-server/files/etc/postfix/${var.fn_main_cf}"
}
}
As you can see there are a lot of variables, and all of this works fine, but ${data_directory} should not be treated as a variable, just as literal text, and should stay as it is in the output file on disk.
The expected output in the main.cf file saved on disk should be the following:
##
## Network settings
##
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
inet_interfaces = 127.0.0.1, ::1, 120.121.123.124, 2a03:b0a0:3:d0::5e79:4001
myhostname = test.test.com
###
### Outbound SMTP connections (Postfix as sender)
###
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
So ${data_directory} should not be treated by Terraform as a Terraform variable, but just as a group of characters, literal (regular) text.
Running terraform plan, the output with double dollar signs $$ is the following:
Error: failed to render : <template_file>:11,43-57: Unknown variable; There is no variable named "data_directory".
template_file remains available in Terraform primarily for Terraform 0.11 users. In Terraform 0.12 there is no need to use template_file, because it has been replaced with two other features:
For templates in separate files, the built-in templatefile function can render an external template directly in the language, without the need for a separate provider and data source.
For inline templates (specified directly within the configuration) you can just write them directly where they need to be, or factor them out via Local Values.
The local_file resource is also a better way to create a local file on disk than to use a local-exec provisioner. By using template_file and local-exec here you're forcing yourself to contend with two levels of additional escaping: Terraform template escaping to get the literal template into the template_file data source, and then shell escaping inside your provisioner.
Here's a more direct way to represent your template and your file:
variable "postfix_config_path" {
# Note that for my example this is expected to be the full path
# to the file, not just the filename. Terraform idiom is to be
# explicit about this sort of thing, rather than relying on
# environment variables like HOME.
type = string
}
locals {
postfix_config = <<-EOT
##
## Network settings
##
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
inet_interfaces = 127.0.0.1, ::1, 120.121.123.124, 2a03:b0a0:3:d0::5e79:4001
myhostname = ${var.hostname}.${var.domain_name}
###
### Outbound SMTP connections (Postfix as sender)
###
smtp_tls_session_cache_database = btree:$${data_directory}/smtp_scache
EOT
}
resource "local_file" "postfix_config" {
filename = var.postfix_config_path
content = local.postfix_config
}
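If you'd rather keep the template in a separate file, a minimal sketch of the templatefile approach mentioned above could look like the following; the templates/main.cf.tpl path is just an assumed location, and the file would contain the same template text as the heredoc, including the $${data_directory} escape:
# templates/main.cf.tpl would reference the passed-in variables directly,
# e.g. "myhostname = ${hostname}.${domain_name}", and keep the
# $${data_directory} escape for the literal Postfix variable.
resource "local_file" "postfix_config" {
  filename = var.postfix_config_path
  content  = templatefile("${path.module}/templates/main.cf.tpl", {
    hostname    = var.hostname
    domain_name = var.domain_name
  })
}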
As the local provider documentation warns, Terraform isn't really designed for directly managing files and other resources on a local machine. The local provider is there for unusual situations, and this might be one of those situations, in which case the above is a reasonable way to address it.
Note though that a more standard Terraform usage pattern would be for Terraform to be used to start up a new virtual machine that will run Postfix and pass in the necessary configuration via a vendor-specific user_data or metadata argument.
If the postfix server is managed separately from this Terraform configuration, then an alternative pattern is to arrange for Terraform to write the necessary data to a shared configuration store (e.g. AWS SSM Parameter Store, or HashiCorp Consul) and then use separate software on the postfix server to read that and update the main.cf file. For HashiCorp Consul, that separate software might be consul-template. Similar software exists for other parameter stores, allowing you to decouple the configuration of individual virtual machines from the configuration of your overall infrastructure.
I added a rule get_timezone_periods with wildcards in the input and output, but it is not working; it fails with the error Missing input files for rule all.
Manually typing the paths works:
"data/raw/test1/ros/timezone.csv",
"data/raw/test3/t02/timezone.csv"
Using wildcards does not:
"data/raw/{{db}}/{{user}}/timezone.csv"
My code:
SENSORS=["timezone", "touch"]
DBS_USERS={"test1":["ros"],
           "test3":["t02"]}

def db_user_path(paths):
    new_paths = []
    for db, users in DBS_USERS.items():
        for user in users:
            for path in paths:
                new_paths.append(path.replace("db/", db + "/").replace("user/", user + "/"))
    return new_paths

rule all:
    input:
        sensors = db_user_path(expand("data/raw/db/user/{sensor}.csv", sensor=SENSORS)),
        timezone_periods = db_user_path(["data/processed/db/user/timezone_periods.csv"])

rule download_dataset:
    input:
        "data/external/{db}-{user}.participant"
    output:
        expand("data/raw/{{db}}/{{user}}/{sensor}.csv", sensor=SENSORS)
    script:
        "src/data/download_dataset.R"

rule get_timezone_periods:
    input:
        # This line below does not work
        # "data/raw/{{db}}/{{user}}/timezone.csv"
        # These two lines work
        "data/raw/test1/ros/timezone.csv",
        "data/raw/test3/t02/timezone.csv"
    output:
        # This line below does not work
        # "data/processed/{{db}}/{{user}}/timezone_periods.csv"
        # These two lines work
        "data/processed/test1/ros/timezone_periods.csv",
        "data/processed/test3/t02/timezone_periods.csv"
    script:
        "src/data/get_timezone_periods.R"
I just realised that I was adding an extra pair of curly braces; it should have been only single braces, e.g. {db} instead of {{db}}, as in the sketch below.
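For reference, a minimal sketch of the corrected rule with single-brace wildcards (same paths as the commented-out lines above):
rule get_timezone_periods:
    input:
        "data/raw/{db}/{user}/timezone.csv"
    output:
        "data/processed/{db}/{user}/timezone_periods.csv"
    script:
        "src/data/get_timezone_periods.R"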
I'm trying out $*ARGFILES.handles and it seems that it opens the files in binary mode.
I'm writing a zip-merge program, that prints one line from each file until there are no more lines to read.
#! /usr/bin/env perl6
my @handles = $*ARGFILES.handles;
# say $_.encoding for @handles;
while @handles
{
    my $handle = @handles.shift;
    say $handle.get;
    @handles.push($handle) unless $handle.eof;
}
I invoke it like this: zip-merge person-say3 repeat repeat2
It fails with: Cannot do 'get' on a handle in binary mode in block at ./zip-merge line 7
The specified files are text files (encoded in utf8), and I get the error message for non-executable files as well as executable ones (with perl6 code).
The commented-out line says utf8 for every file I give it, so they should not be binary.
perl6 -v: This is Rakudo version 2018.10 built on MoarVM version 2018.10
Have I done something wrong, or have I uncovered an error?
The IO::Handle objects that .handles returns are closed.
my @*ARGS = 'test.p6';
my @handles = $*ARGFILES.handles;
for @handles { say $_ }
# IO::Handle<"test.p6".IO>(closed)
If you just want to get your code to work, add the following line after assigning to @handles.
.open for @handles;
The reason for this is that the iterator for .handles is written in terms of IO::CatHandle.next-handle, which opens the current handle and closes the previous handle.
The problem is that all of them get a chance to be both the current handle and the previous handle before you get a chance to do any work on them.
(Perhaps .next-handle and/or .handles needs a :!close parameter.)
Assuming you want it to work like roundrobin, I would actually write it more like this:
# /usr/bin/env perl6
use v6.d;
my @handles = $*ARGFILES.handles;
# a sequence of line sequences
my $line-seqs = @handles.map(*.open.lines);
# Seq.new(
# Seq.new( '# /usr/bin/env perl6', 'use v6.d' ), # first file
# Seq.new( 'foo', 'bar', 'baz' ), # second file
# )
for flat roundrobin $line-seqs {
    .say
}
# `roundrobin` without `flat` would give the following result
# ('# /usr/bin/env perl6', 'foo'),
# ('use v6.d', 'bar'),
# ('baz')
If you used an array for $line-seqs, you will need to de-itemize (.<>) the values before passing them to roundrobin.
for flat roundrobin @line-seqs.map(*.<>) {
    .say
}
Actually I personally would be more likely to write something similar to this (long) one-liner.
$*ARGFILES.handles.eager».open».lines.&roundrobin.flat.map: *.put
:bin is always set on this type of object. Since you are working on the handles, you should either read line by line as shown in the example, or reset the handle so that it's not in binary mode.
Is it possible with liquibase to generate changelogs from an existing database?
I would like to generate one XML changelog per table (rather than every create-table statement in one single changelog).
If you look into the documentation, it looks like it generates only one changelog with many changesets (one for each table). So by default there is no option to generate one changelog per table.
While liquibase generate-changelog still doesn't support splitting up the generated changelog, you can split it yourself.
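For example, you might first generate the combined changelog in JSON with something like the following (a rough sketch; exact flag spellings, the JDBC URL, and the credentials depend on your Liquibase version and database):
liquibase \
  --url=jdbc:postgresql://localhost:5432/mydb \
  --username=myuser \
  --password=secret \
  --changeLogFile=changelog.json \
  generateChangeLog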
If you're using JSON changelogs, you can do this with jq.
I created a jq filter to group the related changesets and combined it with a Bash script to split out the contents. See this blog post.
jq filter, split_liquibase_changelog.jq:
# Define a function for mapping a change onto its destination file name
# createTable and createIndex use the tableName field
# addForeignKeyConstraint uses baseTableName
# Default to using the name of the change, e.g. createSequence
def get_change_group: map(.tableName // .baseTableName)[0] // keys[0];
# Select the main changelog object
.databaseChangeLog
# Collect the changes from each changeSet into an array
| map(.changeSet.changes | .[])
# Group changes according to the grouping function
| group_by(get_change_group)
# Select the grouped objects from the array
| .[]
# Get the group name from each group
| (.[0] | get_change_group) as $group_name
# Select both the group name...
| $group_name,
# and the group, wrapped in a changeSet that uses the group name in the ID and
# the current user as the author
{ databaseChangeLog: {
    changeSet: {
      id: ("table_" + $group_name),
      author: env.USER,
      changes: . } } }
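Run on its own, the filter emits alternating lines: a group name, then the wrapped changeSet object for that group. That pairing is what the Bash script below consumes with its two read calls. A hypothetical run (table names, author, and changes abbreviated for illustration) might look like:
# Show the group-name / object pairs the filter produces
jq --raw-output --compact-output --from-file split_liquibase_changelog.jq < changelog.json
# customers
# {"databaseChangeLog":{"changeSet":{"id":"table_customers","author":"alice","changes":[...]}}}
# orders
# {"databaseChangeLog":{"changeSet":{"id":"table_orders","author":"alice","changes":[...]}}}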
Bash:
#!/usr/bin/env bash
# Example: ./split_liquibase_changelog.sh schema < changelog.json
set -e -o noclobber
OUTPUT_DIRECTORY="${1:-schema}"
OUTPUT_FILE="${2:-schema.json}"
# Create the output directory
mkdir --parents "$OUTPUT_DIRECTORY"
# --raw-output: don't quote the strings for the group names
# --compact-output: output one JSON object per line
jq \
--raw-output \
--compact-output \
--from-file split_liquibase_changelog.jq \
| while read -r group; do # Read the group name line
# Read the JSON object line
read -r json
# Process with jq again to pretty-print the object, then redirect it to the
# new file
(jq '.' <<< "$json") \
> "$OUTPUT_DIRECTORY"/"$group".json
done
# List all the files in the input directory
# Run jq with --raw-input, so input is parsed as strings
# Create a changelog that includes everything in the input path
# Save the output to the desired output file
(jq \
--raw-input \
'{ databaseChangeLog: [
{ includeAll:
{ path: . }
}
] }' \
<<< "$OUTPUT_DIRECTORY"/) \
> "$OUTPUT_FILE"
If you need to use XML changesets, you can try adapting this solution using an XML tool like XQuery instead.