Enable IOCP on AIX without smitty

Hey, I want to automate enabling (making available) the I/O completion port (IOCP) device on AIX.
For some reason, everywhere I look it says to use smitty to enable it. Is it possible to do this without smitty, and if so, how?
Or, if I do have to use smitty, does anyone know whether I can automate the process with a script?

Found out the procedure.
Since smit (smitty) just runs external commands under the hood, it was running mkdev -l iocp0 to make IOCP available.

Don't forget that you can check its status with the lsdev command:
lsdev -Cc iocp

The two commands you need are:
mkdev -l iocp0
chdev -l iocp0 -P -a autoconfig='available'
The second command makes sure the device comes back as available after a reboot; otherwise it could revert to the "defined" state.
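Putting that together, a small non-interactive script could look like this (a sketch; the DRY_RUN preview switch is my own addition, not part of any AIX tooling, so you can review the commands before running them for real on the AIX host):

```shell
#!/bin/sh
# Sketch: enable IOCP on AIX non-interactively.
# DRY_RUN=1 (the default here) only prints the commands;
# set DRY_RUN=0 on the actual AIX host to execute them.

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run mkdev -l iocp0                               # make IOCP available now
run chdev -l iocp0 -P -a autoconfig='available'  # keep it available across reboots
run lsdev -Cc iocp                               # verify the resulting state
```

Run it once with DRY_RUN=1 to confirm the command list, then with DRY_RUN=0 sh enable_iocp.sh on the AIX host.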

You can always discover the underlying command that smitty executes by simply pressing Esc+6:
COMMAND STATUS

Command: OK            stdout: yes           stderr: no

Before command completion, additional instructions may appear below.

iocp0 Available

┌────────────────────────────────────────┐
│          SHOW COMMAND STRING           │
│                                        │
│ Press Enter or Cancel to return to the │
│ application.                           │
│                                        │
│ mkdev -l iocp0                         │
└────────────────────────────────────────┘


Defining multiple cases for an Ansible variable based on multiple conditions

I have this variable here, set in a .yaml variables file
patch_plan: 'foo-{{ patch_plan_week_and_day }}-bar'
I want my patch_plan_week_and_day variable to be set dynamically, based on role and environment, which are two other variables set elsewhere (it doesn't matter where for now), outside this variables file.
For instance, here are three cases:
If role = 'master' and environment = 'srvb', then patch_plan_week_and_day = 'Week1_Monday' and thus patch_plan = 'foo-Week1_Monday-bar'.
If role != 'master' and environment = 'srvb', then patch_plan_week_and_day = 'Week1_Tuesday' and thus patch_plan = 'foo-Week1_Tuesday-bar'.
If role = 'slave' and environment = 'pro', then patch_plan_week_and_day = 'Week3_Wednesday' and thus patch_plan = 'foo-Week3_Wednesday-bar'.
This is the idea of the code:
patch_plan: 'foo-{{ patch_plan_week_and_day }}-bar'
# Patch Plans
## I want something like this:
# case 1
patch_plan_week_and_day: Week1_Monday
when: role == 'master' and environment == 'srvb'
# case 2
patch_plan_week_and_day: Week1_Tuesday
when: role != 'master' and environment == 'srvb'
# case 3
patch_plan_week_and_day: Week3_Wednesday
when: role == 'slave' and environment == 'pro'
I have 14 cases in total.
Put the logic into a dictionary. For example,
patch_plan_week_and_day_dict:
  srvb:
    master: Week1_Monday
    default: Week1_Tuesday
  pro:
    slave: Week3_Wednesday
    default: WeekX_Wednesday
Create the project for testing
shell> tree .
.
├── ansible.cfg
├── hosts
├── pb.yml
└── roles
    ├── master
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── non_master
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    └── slave
        ├── defaults
        │   └── main.yml
        └── tasks
            └── main.yml
10 directories, 9 files
shell> cat ansible.cfg
[defaults]
gathering = explicit
inventory = $PWD/hosts
roles_path = $PWD/roles
retry_files_enabled = false
stdout_callback = yaml
shell> cat hosts
localhost
shell> cat pb.yml
- hosts: localhost
  vars:
    patch_plan_week_and_day_dict:
      srvb:
        master: Week1_Monday
        default: Week1_Tuesday
      pro:
        slave: Week3_Wednesday
        default: WeekX_Wednesday
  roles:
    - "{{ my_role }}"
The code of all roles is identical
shell> cat roles/master/defaults/main.yml
patch_plan_role: "{{ (my_role in patch_plan_week_and_day_dict[env].keys()|list)|
                     ternary(my_role, 'default') }}"
patch_plan_week_and_day: "{{ patch_plan_week_and_day_dict[env][patch_plan_role] }}"
shell> cat roles/master/tasks/main.yml
- debug:
    var: patch_plan_week_and_day
Example 1.
shell> ansible-playbook pb.yml -e env=srvb -e my_role=master
...
patch_plan_week_and_day: Week1_Monday
Example 2.
shell> ansible-playbook pb.yml -e env=srvb -e my_role=non_master
...
patch_plan_week_and_day: Week1_Tuesday
Example 3.
shell> ansible-playbook pb.yml -e env=pro -e my_role=slave
...
patch_plan_week_and_day: Week3_Wednesday
A lot of considerations here ...
It seems you are trying to use Ansible as a programming language, which it isn't. You've started to implement something without any description of your use case or of what the actual problem is. The given example looks like an anti-pattern.
... set dynamically, based on role and environment ...
It is in fact "static" and based on properties of the systems; you are only trying to generate the values at runtime. The time slots in which patches can or should be applied (the patch window) are facts about a system and are usually kept in a Configuration Management Database (CMDB). So this kind of information should already be available, either in a database, within the Ansible inventory, or as a custom fact on the system itself.
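For illustration, a custom fact could be as simple as a small JSON file dropped into /etc/ansible/facts.d/ on each host (the filename patching.fact and the key patch_window are hypothetical examples, not values from the question):

```json
{
  "patch_window": "Week1_Monday"
}
```

Saved as /etc/ansible/facts.d/patching.fact, the value shows up after fact gathering as {{ ansible_local.patching.patch_window }}, so the playbook no longer needs any case logic at all.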
... which are 2 other variables set elsewhere (doesn't matter now) outside this variables file. ...
Probably it does matter, and maybe you could configure the patch cycle or patch window there.
By pursuing your approach further you'll mix up playbook logic with the infrastructure description or configuration properties, which quickly leads to less readable and probably unmaintainable code. You'll also deny yourself the opportunity to maintain the system configuration within a Version Control System (VCS), a CMDB, or the inventory.
Therefore, avoid CASE, SWITCH, and IF-THEN-ELSE structures and describe the desired state of your systems instead.
Some Further Readings
In addition to the sources already given.
Best Practices - Content Organization
General tips
In the end, this is what fixed it. Thank you, everyone:
patch_plan: 'foo-{{ patch_plan_week_and_day[environment][role] }}-bar'
patch_plan_week_and_day:
  srvb:
    master: Week1_Monday
    slave: Week1_Tuesday
  pre:
    master: Week1_Sunday
    slave: Week1_Friday
  pro:
    master: Week1_Thursday
    slave: Week1_Wednesday

gzip command block returns "Too many levels of symbolic links"

I'm trying to run a fairly simple gzip command across my FASTQ files, but I get a strange error.
#!/usr/bin/env nextflow
nextflow.enable.dsl=2

params.gzip = "sequences/sequences_split/sequences_trimmed/trimmed*fastq"

workflow {
    gzip_ch = Channel.fromPath(params.gzip)
    GZIP(gzip_ch)
    GZIP.out.view()
}

process GZIP {
    input:
    path read

    output:
    stdout

    script:
    """
    gzip ${read}
    """
}
Error:
Command error:
gzip: trimmed_SRR19573319_R2.fastq: Too many levels of symbolic links
I tried running a loop in the script instead, or running gzip on individual files, which works, but I'd rather use the Nextflow syntax.
By default, Nextflow stages process input files using symbolic links. The problem is that gzip ignores symbolic links. From the GZIP(1) man page:
The gzip command will only attempt to compress regular files. In particular, it will ignore symbolic links.
If the objective is to create a reproducible workflow, it's usually best to avoid modifying the workflow inputs directly anyway. Either use the stageInMode directive to change how the input files are staged in. For example:
process GZIP {
    stageInMode 'copy'

    input:
    path fastq

    output:
    path "${fastq}.gz"

    """
    gzip "${fastq}"
    """
}
Or, preferably, just modify the command to redirect stdout to a file:
process GZIP {
    input:
    path fastq

    output:
    path "${fastq}.gz"

    """
    gzip -c "${fastq}" > "${fastq}.gz"
    """
}
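To see why the stdout redirection side-steps the issue, here's a quick demo outside Nextflow (the file names are throwaway temp files, not anything from the workflow): gzip -c only reads the input, so it does not care that the input is a symlink and never tries to replace the original file.

```shell
# Demonstrate that `gzip -c` works fine on a symlinked input.
tmp=$(mktemp -d)
printf 'ACGT\n' > "$tmp/reads.fastq"         # stand-in for a FASTQ file
ln -s "$tmp/reads.fastq" "$tmp/link.fastq"   # like a Nextflow staged-in symlink
gzip -c "$tmp/link.fastq" > "$tmp/link.fastq.gz"
gunzip -c "$tmp/link.fastq.gz"               # prints: ACGT
```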
Michael!
I can't reproduce your issue. I created the folder structure in my current directory as you described, with three files in it, as you can see below:
➜ ~ tree sequences/
sequences/
└── sequences_split
└── sequences_trimmed
├── trimmed_a_fastq
├── trimmed_b_fastq
└── trimmed_c_fastq
Then I copy-pasted your Nextflow script file; the only change I made was to use gzip -f ${read} instead of the version without the -f option. Then everything worked fine. The reason you need -f is that Nextflow keeps every task contained in a subfolder within work. This means your input files are symbolically linked, and gzip will complain that they're not regular files (that's what happened here, on macOS Ventura) or something like that (it may depend on the OS; I'm not sure). The -f option works around this issue.
N E X T F L O W ~ version 22.10.1
Launching `ex2.nf` [golden_goldstine] DSL2 - revision: 70559e4bcb
executor > local (3)
[ad/447348] process > GZIP (1) [100%] 3 of 3 ✔
➜ ~ tree work
work
├── 0c
│   └── ded66d5f2e56cfa38d85d9c86e4e87
│       └── trimmed_a_fastq.gz
├── 67
│   └── 949c28cce5ed578e9baae7be2d8cb7
│       └── trimmed_c_fastq.gz
└── ad
    └── 44734845950f28f658226852ca4200
        └── trimmed_b_fastq.gz
They're gzip-compressed files (even though they may look just like text files, depending on the demo content). I decided to reply with an answer because it allows me to use Markdown to show you how I did it. Feel free to comment on this answer if you want to discuss this topic.

HTTP reverse tunnel

Question
Is it possible to use a SOCKS tunnel (or any other form of tunnel on port 80 or 443) to control the local machine that creates the tunnel from the remote machine? Basically, ssh -R [...] when ssh is not an option and only TCP connections on ports 80 and 443 are possible?
Concrete scenario
Due to a very restrictive security policy of one of our customers, we currently have to connect to a Windows jump host without the ability to copy-and-paste stuff there. From there, we download needed files via web browser and copy via ssh to the target machine, or use ssh directly to do maintenance work on the target machine. However, this workflow is time-consuming, and honestly quite annoying.
Unfortunately, the firewall seems to be able to distinguish between real HTTP traffic and ssh, as instructing sshd on our server to accept connections on 443 did not work.
     Firewall
    (HTTP only)
┌──────────────┐
│              │
│ ┌─────────┐  │   ???   ┌──────────┐
│ │Jumphost ├──┼────────►│Our Server│
│ │(Windows)│  │         └────▲─────┘
│ └──┬──────┘  │              │
│    │ssh      │              │ssh
│    │         │              │
│ ┌──▼────┐    │         ┌────┴────┐
│ │Target │    │         │Developer│
│ │(Linux)│    │         │Machine  │
│ └───────┘    │         └─────────┘
│              │
└──────────────┘
Any hints are highly appreciated 👍🏻
The problem seems to be a firewall with deep packet inspection.
You can overcome it by using ssh over SSL, with stunnel or openssl.
From the Windows box you can tunnel with an stunnel client to an stunnel server on our-server.
That encapsulates all the (ssh) data in SSL, so there is no difference from an HTTPS connection.
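As a rough illustration of the stunnel setup (the section name, hostnames, ports, and certificate path below are placeholders, not values from your environment):

```ini
; --- client side (on the Windows jump host), stunnel.conf ---
[ssh-over-ssl]
client = yes
accept = 127.0.0.1:2222
connect = our-server.example.com:443

; --- server side (on our-server), a separate stunnel.conf ---
; [ssh-over-ssl]
; cert = /etc/stunnel/stunnel.pem
; accept = 443
; connect = 127.0.0.1:22
```

With both ends running, ssh -p 2222 user@127.0.0.1 from the jump host reaches sshd on our-server, wrapped in ordinary TLS on port 443.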
Another option could be ptunnel-ng, which supports a TCP connection over ICMP (ping).
Most firewalls ignore ICMP, so if you can ping your our-server this should work, too.
But ptunnel-ng sometimes seems a bit unstable.
If you can't install or execute programs on the Windows jump box, you can open ports, redirect them with ssh, and use them directly from the target Linux machine.
On your windows jumpbox:
ssh target -R target:7070:our-server:443
On the target (linux) you can use localhost:7070 to connect to our-server:443
I would recommend using Docker for the client and server parts.
The only thing I can't do is run the ptunnel server inside a container, probably because of the required privileges.
Using ptunnel
On the server
The ptunnel binary is built inside Docker, but used directly by the host.
This example expects an Ubuntu server.
Dockerfile.server
FROM ubuntu:latest
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y build-essential autoconf automake git
RUN mkdir -p /workdir
WORKDIR /workdir
RUN git clone https://github.com/lnslbrty/ptunnel-ng.git && cd ptunnel-ng && ./autogen.sh
start-server.sh
#!/bin/sh
# Starts the ICMP tunnel server; this doesn't work inside a Docker container
# (or perhaps it does, but I don't know how)
script_dir=$(cd "$(dirname "$0")"; pwd)
if [ ! -f $script_dir/ptunnel-ng ]; then
    # Build the ptunnel binary and copy it to the host
    docker build -t ptunnel-ng-build-server -f $script_dir/Dockerfile.server $script_dir
    docker run --rm -v $script_dir:/shared ptunnel-ng-build-server cp /workdir/ptunnel-ng/src/ptunnel-ng /shared
fi
magic=${1-123456}
sudo $script_dir/ptunnel-ng --magic $magic
On the client
FROM alpine:latest as builder
ARG DEBIAN_FRONTEND=noninteractive
RUN apk add --update alpine-sdk bash autoconf automake git
RUN mkdir -p /workdir
WORKDIR /workdir
RUN git clone https://github.com/lnslbrty/ptunnel-ng.git && cd ptunnel-ng && ./autogen.sh
FROM alpine:latest
WORKDIR /workdir
COPY --from=builder /workdir/ptunnel-ng/src/ptunnel-ng .
start-client.sh
#!/bin/sh
image=ptunnel-ng
if ! docker inspect $image > /dev/null 2> /dev/null; then
    docker build -t $image .
fi
magic=${1-123456}
ptunnel_host=${2-myserver.de}
port=${3-2001}
docker run --rm --detach -ti --name 'ptunnel1' -v $PWD:/shared -p 2222:2222 $image //workdir/ptunnel-ng --magic ${magic} -p${ptunnel_host} -l${port}
If you want to run the ptunnel client on Termux, that can be done too, but it requires some small code changes.

Apply visual styling to echo commands used in npm scripts via package.json

I recently put together a build tool with npm and package.json scripts, and I have a few echo commands to state which parts of the pipeline are currently running.
For example (from my package.json):
{
  "scripts": {
    "clean": "rimraf a-directory/",
    "preclean": "echo \"\n[ Cleaning build directories ]\n\""
  }
}
When I run npm run clean from Bash, it prints my echo message and then cleans the appropriate directory.
I'd like to change the colour, font weight, and background colour to make these echo statements stand out and be more informative at a glance, but I've struggled to even find a starting point for doing this.
There's lots of info about doing this in regular CLI/Bash scripts, via grunt and gulp, or via JS scripts, but nothing I've found attempts it from the scripts section of package.json.
What am I missing? All help appreciated.
Many thanks.
Consoles/terminals typically provide support for ANSI/VT100 Control sequences, so it's possible to use these codes to control font colour, font weight, background colour, etc.
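As a quick sanity check that your terminal supports them, you can try the codes directly with printf before wiring them into package.json (a minimal sketch; 104, 97, and 0 are the same codes used in the npm-script example below):

```shell
# 104 = light blue background, 97 = white foreground, 0 = reset.
# \033 (octal for 0x1b) is the ESC character that starts each sequence.
printf '\033[104m\033[97m[ Cleaning build directories ]\033[0m\n'
```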
For a Bash only solution refer to the Bash (MacOS/Linux/ etc..) section below.
However, if cross platform support is required then follow the solution described in the Cross Platform section below.
Bash (MacOS/Linux/ etc..)
Important Note: The following will not work successfully on non-bash consoles, such as Windows Command Prompt (i.e. cmd.exe) or PowerShell.
This example npm-script below:
"scripts": {
  "clean": "rimraf a-directory/",
  "preclean": "echo \"\\x1b[104m\\x1b[97m\n[ Cleaning build directories ]\n\\x1b[0m\""
}
...will log something like the following in your console when running npm run clean (i.e. white text on a light blue background):
Breakdown of the necessary syntax/codes (each \\x1b is the <Esc> character that starts an ANSI control sequence):

\\x1b[104m \\x1b[97m Mssg \\x1b[0m
└───┬────┘ └───┬───┘ └─┬┘ └───┬──┘
    │          │       │      └─ Reset all attributes
    │          │       └─ Your message
    │          └─ White text
    └─ Light blue background
Further examples:
The following npm-scripts demonstrate several more combinations:
"scripts": {
  "a": "echo \"\\x1b[1m\\x1b[39mBold Text\\x1b[0m\"",
  "b": "echo \"\\x1b[91mLight Red Text\\x1b[0m\"",
  "c": "echo \"\\x1b[94mLight Blue Text\\x1b[0m\"",
  "d": "echo \"\\x1b[92mLight Green Text\\x1b[0m\"",
  "e": "echo \"\\x1b[4m\\x1b[91mLight Red Underlined Text\\x1b[0m\"",
  "f": "echo \"\\x1b[101m\\x1b[97mLight Red Background and White Text\\x1b[0m\"",
  "g": "echo \"\\x1b[104m\\x1b[97mLight Blue Background and White Text\\x1b[0m\"",
  "h": "echo \"\\x1b[30m\\x1b[103mLight Yellow Background and Black Text\\x1b[0m\"",
  "i": "echo \"\\x1b[97m\\x1b[100mDark Gray Background and White Text\\x1b[0m\"",
  "bash-echo-all": "npm run a -s && npm run b -s && npm run c -s && npm run d -s && npm run e -s && npm run f -s && npm run g -s && npm run h -s && npm run i -s"
},
Running npm run bash-echo-all -s using the scripts above will output the following to your console (the -s option just makes npm log a bit less):
A more comprehensive list of codes can be found at the link provided at the top of this post, (or see codes listed in Cross Platform section below), but remember not all ANSI/VT100 Control sequences are supported.
Cross Platform
For a cross-platform solution, (one which works successfully with Bash, Windows Command Prompt / cmd.exe, and PowerShell etc..), you'll need to create a nodejs utility script to handle the logging. This nodejs script can then be invoked via your npm-scripts.
The following steps describe how this can be achieved:
Create a nodejs utility script as follows:
echo.js
const args = process.argv;
const mssg = args[2];
const opts = [
  '-s', '--set',
  '-b', '--bg-color',
  '-f', '--font-color'
];

function escapeAnsiCode(code) {
  return '\x1b[' + code + 'm';
}

// Build the escape-code prefix from whichever styling options were passed in.
const ansiStyles = opts.map(function (opt) {
  return args.indexOf(opt) > -1
    ? escapeAnsiCode(args[args.indexOf(opt) + 1])
    : '';
});

console.log('%s%s%s', ansiStyles.join(''), mssg, '\x1b[0m');
Let's name the file echo.js and save it in the root of your project directory, i.e. in the same folder where package.json is stored.
Then, given your example, let's add a npm-script to package.json as follows:
"scripts": {
  "clean": "rimraf a-directory/",
  "preclean": "node echo \"[ Cleaning build directories ]\" --bg-color 104 --font-color 97"
}
When running npm run clean you'll see the same message logged to your console as before with the Bash-only solution, i.e. white text on a light blue background.
Overview of usage syntax for invoking echo.js via npm-scripts
node echo \"message\" [[-s|--set] number] [[-b|--bg-color] number] [[-f|--font-color] number]
node echo \"message\"
The node echo \"message\" part is mandatory. The message is where you enter the text to be logged, and it must be wrapped in escaped double quotes \"...\" to prevent splitting.
The remaining parts, which are for formatting/styling purposes, are all optional and can be given in any order. However, when used, they must follow the initial node echo \"message\" part and be separated by a single space.
[--set|-s]
The --set option, or its shorthand equivalent -s, followed by a single space and one of the following ANSI codes, can be used to specify general formatting:
┌───────────────────────────┐
│ Code   Description        │
├───────────────────────────┤
│   1    Bold/Bright        │
│   2    Dim                │
│   4    Underlined         │
│   5    Blink              │
│   7    Reverse/invert     │
│   8    Hidden             │
└───────────────────────────┘
Note: Codes 1 and 4 worked successfully with Bash; however, they were not supported by Windows Command Prompt and PowerShell. So, if repeatability across platforms is important, I recommend avoiding the --set|-s option entirely.
[--bg-color|-b]
The --bg-color option, or its shorthand equivalent -b, followed by a single space and one of the following ANSI codes, can be used to specify the background color:
┌───────────────────────────┐
│ Code   Background Color   │
├───────────────────────────┤
│  49    Default            │
│  40    Black              │
│  41    Red                │
│  42    Green              │
│  43    Yellow             │
│  44    Blue               │
│  45    Magenta            │
│  46    Cyan               │
│  47    Light Gray         │
│ 100    Dark Gray          │
│ 101    Light Red          │
│ 102    Light Green        │
│ 103    Light Yellow       │
│ 104    Light Blue         │
│ 105    Light Magenta      │
│ 106    Light Cyan         │
│ 107    White              │
└───────────────────────────┘
[--font-color|-f]
The --font-color option, or its shorthand equivalent -f, followed by a single space and one of the following ANSI codes, can be used to specify the font color:
┌───────────────────────────┐
│ Code   Font Color         │
├───────────────────────────┤
│  39    Default            │
│  30    Black              │
│  31    Red                │
│  32    Green              │
│  33    Yellow             │
│  34    Blue               │
│  35    Magenta            │
│  36    Cyan               │
│  37    Light Gray         │
│  90    Dark Gray          │
│  91    Light Red          │
│  92    Light Green        │
│  93    Light Yellow       │
│  94    Light Blue         │
│  95    Light Magenta      │
│  96    Light Cyan         │
│  97    White              │
└───────────────────────────┘
Further examples:
The following scripts mirror the Bash examples shown earlier:
"scripts": {
  "r": "node echo \"Bold Text\" -s 1",
  "s": "node echo \"Light Red Text\" -f 91",
  "t": "node echo \"Light Blue Text\" -f 94",
  "u": "node echo \"Light Green Text\" -f 92",
  "v": "node echo \"Light Red Underlined Text\" -s 4 -f 91",
  "w": "node echo \"Light Red Background and White Text\" -b 101 -f 97",
  "x": "node echo \"Light Blue Background and White Text\" -b 104 -f 97",
  "y": "node echo \"Light Yellow Background and Black Text\" -f 30 -b 103",
  "z": "node echo \"Dark Gray Background and White Text\" -b 100 -f 97",
  "node-echo-all": "npm run r -s && npm run s -s && npm run t -s && npm run u -s && npm run v -s && npm run w -s && npm run x -s && npm run y -s && npm run z -s"
},
Running npm run node-echo-all -s using the scripts above will output the same results as shown in the Bash (MacOS/Linux/ etc..) section above.
For brevity, these scripts use the shorthand -s, -b, and -f options. However, they can be substituted with their longhand equivalents --set, --bg-color, and --font-color respectively, to make your code more readable.

Templates are ignored

Description
I'm using my templates which are located inside ./templates that looks like:
./templates
└───auth
    │   OAuth.mustache
    │
    └───living
            ILivingOAuthClient.mustache
            LivingAccessTokenResponse.mustache
            LivingAuthzCodeResponse.mustache
            LivingAuthzCodeValidator.mustache
            LivingOAuthClient.mustache
            OAuthzAuthzCodeResponse.mustache
Unfortunately, the mustache templates in the subfolder are not generated; only OAuth.mustache is. Any ideas?
I'm using the latest version:
2.3.0
and I'm using this command line:
java -jar swagger-codegen-cli.jar generate -i http://guest1:8080/authz/cmng/swagger.json -l java -c config.json -t .\templates\