Python argparse: force use of a set of flags if some specific flags are present

I have the following specification I must follow for a side project:
script.py -T <token> [ -X (-C <carId> | -M <motorcycleId>) | -Y (-C <carId> | -M <motorcycleId>) | -Z (-C <carId> | -M <motorcycleId>) ] <file>
First, my script must have a token and file path provided at all times.
Second, the script may take one flag -X, -Y or -Z at a time. If any of those flags are used, the user must also provide the flag -C <carId> or -M <motorcycleId>.
I have a working script that implements this specification partially, but I couldn't come up with the logic for the second part.
import argparse

parser = argparse.ArgumentParser()
required = parser.add_argument_group('required arguments')
required.add_argument('-T', metavar='<token>', required=True)
exclusive_commands = parser.add_mutually_exclusive_group()
exclusive_commands.add_argument('-X', action='store_true')
exclusive_commands.add_argument('-Y', action='store_true')
exclusive_commands.add_argument('-Z', action='store_true')
parser.add_argument('file', metavar='<file>')
How can I enforce the use of the flags -C <carId> | -M <motorcycleId> when the user uses any of the -X, -Y, -Z flags?
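argparse has no declarative way to express "if -X, -Y or -Z is given, then -C or -M is required", so one common approach (a sketch that builds on the parser above, not the only way to do it; the group name vehicle is just an illustrative local variable) is to declare -C and -M as their own mutually exclusive group and validate the cross-flag rule after parsing, reporting violations with parser.error():
# -C and -M are mutually exclusive with each other
vehicle = parser.add_mutually_exclusive_group()
vehicle.add_argument('-C', metavar='<carId>')
vehicle.add_argument('-M', metavar='<motorcycleId>')

args = parser.parse_args()

# -X/-Y/-Z require -C or -M, and -C/-M only make sense with -X/-Y/-Z
if (args.X or args.Y or args.Z) and not (args.C or args.M):
    parser.error('-X, -Y and -Z require either -C <carId> or -M <motorcycleId>')
if (args.C or args.M) and not (args.X or args.Y or args.Z):
    parser.error('-C and -M are only valid together with -X, -Y or -Z')
parser.error() prints the usage line plus the message and exits with status 2, which is how argparse reports its own built-in constraint violations, so the behaviour stays consistent with the rest of the parser.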

Related

How do I transfer small files quickly over the network with zstd?

As the question states, I want to back up many small files and send them via ssh to a destination. Does rsync speed things up significantly vs tar?
This works quite well and is significantly faster than gzip.
Push (Upload)
tar -c --zstd src_dir | ssh user@dest_addr "cd dest_dir && tar -x --zstd"
This does the following:
Creates a tar file using Zstd and outputs it via STDOUT
Connects via ssh, piping STDOUT over the network
Reads data from STDIN, and extracts it
Custom zstd flags
This uses the maximum regular compression level, 19 (the default is 3), and multithreading:
tar -c -I "zstd -19 -T0" src_dir | ssh user@dest_addr "cd dest_dir && tar -x --zstd"
With progress
tar -c --zstd src_dir | pv --timer --rate | ssh user@dest_addr "cd dest_dir && tar -x --zstd"
Pull (Download)
ssh user@dest_addr "tar --zstd -cf - src_dir" | tar -x --zstd --directory dest_dir

Gitlab Tee: collapsed multi-line command & unrecognized option: append

Inside my GitLab CI file I have a snippet which is copied from the "Publish npm packages" instructions:
before_script:
  - |
    {
      echo "@${CI_PROJECT_ROOT_NAMESPACE}:registry=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/npm/"
      echo "${CI_API_V4_URL#https?}/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=\${CI_JOB_TOKEN}"
    } | tee --append .npmrc
When I try to run this on Alpine Linux I get:
$ { # collapsed multi-line command
tee: unrecognized option: append
BusyBox v1.31.1 () multi-call binary.
Usage: tee [-ai] [FILE]...
Copy stdin to each FILE, and also to stdout
-a Append to the given FILEs, don't overwrite
-i Ignore interrupt signals (SIGINT)
The reason is simple: there are two implementations of tee.
BusyBox, which Alpine uses, has tee, but it doesn't provide --append. It does provide an -a option (short for append), defined as "append to the given FILEs, don't overwrite".
GNU coreutils provides a tee too; it has the --append option you're making use of here, likewise defined as "append to the given FILEs, do not overwrite". As a shorthand, GNU tee also provides the alias -a.
So in short, if you want something to be compatible with Alpine and BusyBox as well as with distros that ship GNU tee, use -a (supported by both) instead of --append (supported only by GNU tee), as in the adjusted snippet below.
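For the job in the question, that only means swapping the long option for the short one in the final line, e.g.:
before_script:
  - |
    {
      echo "@${CI_PROJECT_ROOT_NAMESPACE}:registry=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/npm/"
      echo "${CI_API_V4_URL#https?}/projects/${CI_PROJECT_ID}/packages/npm/:_authToken=\${CI_JOB_TOKEN}"
    } | tee -a .npmrc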

tcpdump with -w -C -G and -z options

I'm trying to take continuous traces which are written to files that are limited by both duration (-G option) and size (-C option). The files are automatically named with the -w option, and finally the files are compressed with the -z gzip option. Altogether what I have is:
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S.pcap -s 0 -C 100 -G 3600 -Z root -z gzip &
The problem is that with the -C option, the current file count is appended onto the name, so I wind up with files ending in: .pcap2.gz .pcap3.gz .pcap4.gz, etc. I would much prefer to have them end as: _2.pcap.gz _3.pcap.gz _4.pcap.gz, etc.
But if I remove .pcap from the -w option, I wind up with 2.gz 3.gz 4.gz
This could work if I could include options in the -z command, like -z "gzip -S .pcap.gz", so that gzip itself appends the .pcap, or if I could use an alias like pcap_gzip="gzip -S .pcap.gz" and then -z pcap_gzip. Neither option seems to work, though; the latter produces this error: compress_savefile:execlp(gzip -S pcap.gz, /home/me/pcaps/MyTrace_2018-08-07_105308_27): No such file or directory
I encountered the same problem today, on CentOS 6. I found your question, but the answer did not work for me.
In fact, it only needs a slight adjustment: write the absolute path of both the saved file name and the script to be executed, for example:
tcpdump -i em1 ... -s 0 -G 10 -w '/home/Svr01_std_%Y%m%d_%H%M%S.pcap' -Z root -z /home/pcapup2arcive.sh
I found out that although the alias doesn't work, I was able to put the same commands in a script and invoke the script via tcpdump -z.
pcap_gzip.sh:
#!/bin/bash
gzip -S .pcap.gz "$@"
Then:
tcpdump -i eth0 -w /home/me/pcaps/MyTrace_%Y-%m-%d_%H%M%S -s 0 -C 100 -G 3600 -Z root -z pcap_gzip.sh &

Copy all keys from one db to another in redis

Instead of MOVE, I want to copy all my keys from one particular DB to another.
Is it possible in Redis? If yes, then how?
If you can't use MIGRATE COPY because of your Redis version (2.6), you might want to copy each key separately, which takes longer but doesn't require you to log in to the machines themselves and allows you to move data from one database to another.
Here's how I copy all keys from one database to another (but without preserving TTLs):
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=0
target_host=localhost
target_port=6379
target_db=1

#copy all keys without preserving ttl!
redis-cli -h $source_host -p $source_port -n $source_db keys \* | while read key; do
  echo "Copying $key"
  redis-cli --raw -h $source_host -p $source_port -n $source_db DUMP "$key" \
    | head -c -1 \
    | redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" 0
done
Keys are not going to be overwritten; in order to do that, delete those keys before copying, or simply flush the whole target database before starting.
The following copies all keys from database number 0 to database number 1 on localhost.
redis-cli --scan | xargs redis-cli migrate localhost 6379 '' 1 0 copy keys
If you use the same server/port you will get a timeout error, but the keys seem to copy successfully anyway (see GitHub Redis issue #1903).
redis-cli -a $source_password -p $source_port -h $source_ip keys \* | while read key; do
  echo "Copying $key"
  redis-cli --raw -a $source_password -h $source_ip -p $source_port -n $dbname DUMP "$key" | head -c -1 | redis-cli -x -a $destination_password -h $destination_IP -p $destination_port RESTORE "$key" 0
done
Latest solution:
Use the RIOT open-source command line tool provided by Redislabs to copy the data.
Reference: https://developer.redis.com/riot/riot-redis/cookbook.html#_performing_migration
GitHub project link: https://github.com/redis-developer/riot
How to install: https://developer.redis.com/riot/riot-redis/
# Source Redis db
SH=test1-redis.com
SP=6379
# Target Redis db
TH=test1-redis.com
TP=6379
# Copy from db0 to db1 (standalone Redis db, or cluster mode disabled)
riot-redis -h $SH -p $SP --db 0 replicate -h $TH -p $TP --db 1 --batch 10000 \
--scan-count 10000 \
--threads 4 \
--reader-threads 4 \
--reader-batch 500 \
--reader-queue 2000 \
--reader-pool 4
RIOT is quicker, supports multithreading, and works well for cross-environment Redis data copies (AWS ElastiCache, Redis OSS, and Redis Labs).
Not directly. I would suggest using the always convenient redis-rdb-tools package (from Sripathi Krishnan) to extract the data from a normal rdb dump and reinject it into another instance.
See https://github.com/sripathikrishnan/redis-rdb-tools
As far as I understand, you need to copy keys from one particular DB (e.g. 5) to another particular DB, say 10. If that is the case, you can use the Redis database dumper (https://github.com/r043v/rdd). Although, as per the documentation, it has a switch (-d) to select a database for the operation, it didn't work for me, so here is what I did:
1. Edit the rdd.c file and look for the int main(int argc, char **argv) function
2. Change the DB as per your requirement
3. Compile the source with make
4. Dump all keys using ./rdd -o "save.rdd"
5. Edit the rdd.c file again and change the DB
6. Run make again
7. Import by using ./rdd "save.rdd" -o insert -s "IP" -p "Port"
I know this is old, but for those of you coming here from Google:
I just published a command line interface utility to npm and GitHub that allows you to copy keys that match a given pattern (even *) from one Redis database to another.
You can find the utility here:
https://www.npmjs.com/package/redis-utils-cli
Try using DUMP to first dump all the keys and then RESTORE them on the target.
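A minimal sketch of that DUMP/RESTORE idea using the redis-py client (the hosts, ports and db numbers below are placeholders, and RESTORE with REPLACE needs Redis 3.0 or newer):
import redis

# placeholder connection details; adjust to your environment
src = redis.Redis(host='localhost', port=6379, db=0)
dst = redis.Redis(host='localhost', port=6379, db=1)

for key in src.scan_iter():      # SCAN instead of KEYS, so the server isn't blocked
    payload = src.dump(key)      # the serialized value, exactly what DUMP returns
    if payload is not None:      # the key may have expired between SCAN and DUMP
        # ttl=0 means "no expiry"; replace=True overwrites keys that already exist
        dst.restore(key, 0, payload, replace=True)
Like the redis-cli loops above, this does not preserve TTLs.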
If you are migrating keys inside the same Redis instance, you can use the internal MOVE command for that (with pipelining for more speed):
#!/bin/bash
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=4
target_db=0

total=$(redis-cli -h $source_host -p $source_port -n $source_db keys \* | sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | wc -c)

#move all keys to the target db (note: MOVE removes them from the source db)
time redis-cli -h $source_host -p $source_port -n $source_db keys \* | \
  sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | \
  pv -s $total | \
  redis-cli -h $source_host -p $source_port -n $source_db >/dev/null

sudo useradd won't make home directory

I have an automated script which works, except that it never makes a home directory. The data is extracted from a database.
Here's the script:
$SQL -s -e "SELECT uid, password FROM registrations WHERE processed = 0" \
| while read A B; do
  sudo useradd $A -p $B -m /home/
done
As you can see, the -m is there, but it seems to be ignored and never makes a home directory, and I have no idea why. I must be missing something, but I've no idea what.
If you run man useradd you'll see that the -m does not expect a parameter.
Running it this way should do the trick (or at least it just did on my Debian Squeeze):
useradd $A -p $B -m
In the man page you'll also find other useful options, such as -d or -b.
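If the /home/ argument in the question was meant to control where the home directory ends up, here is a hedged sketch of how -d or -b could be combined with -m (the paths are only examples):
# -d sets the home directory explicitly; -m still creates it
sudo useradd "$A" -p "$B" -m -d "/home/$A"
# or keep the default directory name and only change the base directory with -b
sudo useradd "$A" -p "$B" -m -b /home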