How to match GRE key using iptables?
I want to match on the GRE tunnel key (5) using iptables; my command is
below:
iptables -A OUTPUT -t raw -p gre -o eth2 -m conntrack --ctrepldstport 5 -j LOG --log-level debug
However, this is not working. Could anyone help point out what is wrong?
root#promg-2n-a-dhcp85:~/openvswitch# iptables --version
iptables v1.4.12
Thanks,
http://www.gossamer-threads.com/lists/iptables/devel/66339
"While porting some changes of the 2.6.21-rc7 pptp/proto_gre conntrack
and nat modules to a 2.4.32 kernel I noticed that the gre_key function
returns a wrong pointer to the GRE key of a version 0 packet thus corrupting
the packet payload.
The intended behaviour for GREv0 packets is to act like
nf_conntrack_proto_generic/nf_nat_proto_unknown so I have ripped the
offending functions (not used anymore) and modified the xx_nat_proto_gre
modules to not touch version 0 (non PPTP) packets."
So, a nice way of fixing problems :-(
It seems this patch was accepted silently, and matching by GRE keys will never work again in Linux, contrary to what is claimed in the iptables man page.
Shameless self-advertising of one of my OSS modules: a while ago I wrote a custom iptables module, "xt_bpfl4", to solve an IPv6 matching problem, and it also works in this case.
Use the BPF expression below to match a key of 0x917e805a. The first byte of the GRE header carries the flag bits (0x20 is the Key Present bit, 0x80 the Checksum Present bit), so the expression first requires the key flag and then reads the key from one of two offsets, depending on whether the optional checksum block is also present:
udp[0:1]&0x20=0x20 and ((udp[0:1]&0xA0=0x20 and udp[4:4]=0x917e805a) or (udp[0:1]&0xA0=0xA0 and udp[16:4]=0x917e805a))
This compiles to:
(000) ldb [0]
(001) and #0x20
(002) jeq #0x20 jt 3 jf 12
(003) ldb [0]
(004) and #0xa0
(005) jeq #0x20 jt 6 jf 8
(006) ld [4]
(007) jeq #0x917e805a jt 11 jf 12
(008) jeq #0xa0 jt 9 jf 12
(009) ld [16]
(010) jeq #0x917e805a jt 11 jf 12
(011) ret #65535
(012) ret #0
or in the format required by xt_bpf & xt_bpfl4:
13,48 0 0 0,84 0 0 32,21 0 9 32,48 0 0 0,84 0 0 160,21 0 2 32,32 0 0 4,21 3 4 2440986714,21 0 3 160,32 0 0 16,21 0 1 2440986714,6 0 0 65535,6 0 0 0
And so match with the following rule:
iptables -I INPUT -p 47 -m bpfl4 --bytecodel4 '13,48 0 0 0,84 0 0 32,21 0 9 32,48 0 0 0,84 0 0 160,21 0 2 32,32 0 0 4,21 3 4 2440986714,21 0 3 160,32 0 0 16,21 0 1 2440986714,6 0 0 65535,6 0 0 0'
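Only the two 2440986714 constants in that bytecode are specific to the key; they are simply 0x917e805a written in decimal, while the rest of the program (the flag checks and offsets) stays the same. As a convenience, here is a small Python sketch, purely illustrative, that substitutes a different key into the same bytecode string:

# Hypothetical helper: rebuild the xt_bpfl4 bytecode above for another
# 32-bit GRE key. Only the two key constants change; the flag checks
# and offsets are kept exactly as in the compiled program.
BYTECODE_TEMPLATE = (
    "13,48 0 0 0,84 0 0 32,21 0 9 32,48 0 0 0,84 0 0 160,21 0 2 32,"
    "32 0 0 4,21 3 4 {key},21 0 3 160,32 0 0 16,21 0 1 {key},"
    "6 0 0 65535,6 0 0 0"
)

def bytecode_for_key(key: int) -> str:
    """Return the bytecode string that matches the given 32-bit GRE key."""
    return BYTECODE_TEMPLATE.format(key=key & 0xFFFFFFFF)

print(bytecode_for_key(0x917E805A))  # reproduces the string above
print(bytecode_for_key(5))           # the key from the original question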
What is the most minimalistic way to run a program inside an OS?
Generally, when I start a program from bash, it forks bash and inherits many things from it, like stdin and stdout. Is there some other way to run a program, with no such setup? Maybe it explicitly opens fd 1, writes something and closes it? I came across nohup and disown, but both of those detach an already running process from bash; initially the process still inherits from bash. Is there a way to start a process that inherits from nothing? I am asking this just out of curiosity and have no practical purpose. When a program runs on a microcontroller, it is just our program that is running, with no additional setup (if setup is required, the user has to do it explicitly). Similarly, is there a way, even in the presence of an operating system, to run just what is programmed, without any setup?
I assume you are using Linux. Running top -u root you see, e.g. on my system (Ubuntu 20.04 x86_64):

PID  PPID USER  PR  NI    VIRT   RES  SHR S %CPU %MEM   TIME+ COMMAND
  1     0 root  20   0  170380 11368 6500 S  0.0  0.0 0:23.52 systemd
  2     0 root  20   0       0     0    0 S  0.0  0.0 0:00.61 kthreadd
  3     2 root   0 -20       0     0    0 I  0.0  0.0 0:00.00 rcu_gp
  4     2 root   0 -20       0     0    0 I  0.0  0.0 0:00.00 rcu_par_gp
  5     2 root   0 -20       0     0    0 I  0.0  0.0 0:00.00 netns
 10     2 root   0 -20       0     0    0 I  0.0  0.0 0:00.00 mm_percpu_wq
 11     2 root  20   0       0     0    0 S  0.0  0.0 0:00.00 rcu_tasks_rude_
 12     2 root  20   0       0     0    0 S  0.0  0.0 0:00.00 rcu_tasks_trace
 13     2 root  20   0       0     0    0 S  0.0  0.0 0:04.48 ksoftirqd/0
 14     2 root  20   0       0     0    0 I  0.0  0.0 1:28.82 rcu_sched

You see that all processes ultimately descend from PPID (parent process ID) zero. PID 0 is not a real process; it represents the kernel's scheduler. The kernel launches systemd (PID 1), and systemd in turn launches the rest of the system's user-space processes. At user level, top -u madfred shows:

3371    1 madfred 20   0  19928  7664  6136 S  0.0  0.0 0:11.62 systemd
3372 3371 madfred 20   0 170404  2460     0 S  0.0  0.0 0:00.00 (sd-pam)
3379 3371 madfred 39  19 659828 16348 12500 S  0.0  0.0 0:02.38 tracker-miner-f
3402 3371 madfred 20   0   8664  5112  3412 S  0.0  0.0 0:00.94 dbus-daemon
3407 3371 madfred 20   0 239712  6740  6064 S  0.0  0.0 0:00.03 gvfsd

There is one per-user systemd that is launched by the root systemd and runs as the user. This user systemd is in charge of launching every process for that user. That is necessary to provide the guarantees the Linux OS gives you: security, memory protection, file resources, etc.

What you want would be to replace the kernel with something else, which is very possible. Check for example:

https://wiki.osdev.org/Bare_bones
https://github.com/contiki-ng/contiki-ng

It is also pretty easy to replace systemd (or the old /sbin/init) with your own custom initializer. Check this answer: Writing my own init executable
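In practice, the closest you can get to "inheriting nothing" under a normal kernel is to spawn the program yourself and strip everything the shell would otherwise pass along. A minimal Python sketch of that idea (the target /bin/true is just an example):

import subprocess

# Sketch: start a program with as little inherited state as practical.
# Even here the child still gets a PID, a parent and kernel defaults;
# "inheriting nothing" only goes as far as the OS allows.
proc = subprocess.Popen(
    ["/bin/true"],
    env={},                    # empty environment instead of a copy of ours
    stdin=subprocess.DEVNULL,  # fds 0/1/2 point at /dev/null, not our tty
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    close_fds=True,            # do not leak any other open descriptors
    start_new_session=True,    # new session, no shared controlling terminal
    cwd="/",                   # do not inherit our working directory
)
proc.wait()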
Write GeoTIFF File to GRIB2 Using GDAL
I am looking to convert a GeoTIFF file to GRIB2 and define several pieces of metadata manually, as seen in the provided literature here. I am using the GDAL library, specifically the gdal_translate utility. My attempt to convert and pass specific metadata is as follows:

gdal_translate -b 1 -mo DISCIPLINE=0 IDS_CENTER=248 IDS_SUBCENTER=4 IDS_MASTER_TABLE=24 IDS_SIGNF_REF_TIME=1 IDS_REF_TIME=2020-07-02T00:00:00Z IDS_PROD_STATUS=0 IDS_TYPE=1 PDS_PDTN=0 PDS_TEMPLATE_NUMBERS="0 4 2 0 96 0 0 0 1 0 0 0 0 103 0 0 0 0 2 255 0 0 0 0 0 7 228 7 2 13 0 0 1 0 0 0 0 2 2 1 0 0 0 1 255 0 0 0 0" PDS_TEMPLATE_ASSEMBLED_VALUES="0 4 2 0 96 0 0 1 0 103 0 2 255 0 0 2020 7 2 13 0 0 1 0 2 2 1 1 255 0" input.tif output.grb2

However, upon executing this command I receive the following error:

ERROR 6: Too many command options 'IDS_MASTER_TABLE=24'

Potential errors: not using the correct option (currently -mo) when attempting to pass the metadata, all metadata pairs needing to be encased in quotation marks, etc. Any help would be greatly appreciated!
You need to add an -mo flag for every metadata item. Your command would become:

$ gdal_translate -b 1 \
      -mo DISCIPLINE=0 \
      -mo IDS_CENTER=248 \
      # etc.
      input.tif output.grb2
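If you end up scripting this, the GDAL Python bindings expose the same knobs. A hedged sketch (assuming the osgeo bindings and the GRIB driver are available, and that metadataOptions accepts the same KEY=VALUE strings as -mo):

# Sketch: the same conversion through the GDAL Python bindings.
from osgeo import gdal

metadata = [
    "DISCIPLINE=0",
    "IDS_CENTER=248",
    "IDS_SUBCENTER=4",
    "IDS_MASTER_TABLE=24",
    # ... one entry per KEY=VALUE pair, exactly as with repeated -mo flags
]

gdal.Translate(
    "output.grb2",
    "input.tif",
    format="GRIB",
    bandList=[1],               # same as -b 1
    metadataOptions=metadata,   # same as repeating -mo on the CLI
)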
Print blank space in a table with awk
I have a pattern that looks something like this:

No Type Pid Status Cause Start Rstr Err Sem Time Program Cl User Action Table
-------------------------------------------------------------------------------------------------------------------------------
0 DIA 10897 Wait yes no 0 0 0 NO_ACTION
1 DIA 10903 Wait yes no 0 0 0 NO_ACTION
2 DIA 10909 Wait yes no 0 0 0 NO_ACTION
3 DIA 10916 Wait yes no 0 0 0 NO_ACTION
4 DIA 10917 Wait yes no 0 0 0 NO_ACTION
5 DIA 9061 Wait yes no 1 0 0 NO_ACTION
6 DIA 10919 Wait yes no 0 0 0 NO_ACTION
7 DIA 10920 Wait yes no 0 0 0 NO_ACTION
8 UPD 10921 Wait yes no 0 0 0 NO_ACTION
9 BTC 24376 Wait yes no 0 0 0 NO_ACTION
10 BTC 25651 Wait yes no 1 0 0 NO_ACTION
11 BTC 25361 Wait yes no 0 0 0 NO_ACTION
12 BTC 15201 Wait yes no 0 0 0 NO_ACTION
13 BTC 5241 Wait yes no 0 0 0 NO_ACTION
14 BTC 23572 Wait yes no 0 0 0 NO_ACTION
15 BTC 8603 Wait yes no 0 0 0 NO_ACTION
16 BTC 1418 Wait yes no 0 0 0 NO_ACTION
17 BTC 18127 Wait yes no 1 0 0 NO_ACTION
18 BTC 14780 Wait yes no 0 0 0 NO_ACTION
19 BTC 18234 Wait yes no 0 0 0 NO_ACTION
20 BTC 14856 Wait yes no 0 0 0 NO_ACTION
21 SPO 10934 Wait yes no 0 0 0 NO_ACTION
22 UP2 10939 Wait yes no 0 0 0 NO_ACTION

Now I am using awk to convert it to something like the below:

NO=0,Type=DIA,Pid=10897,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=

The above is a sample for one line; it would be the same for all lines. We remove the column header via a sed command at runtime. Now, when we use awk, it misses the blank space between Status and Cause and writes the value that should belong to Start into Cause. We are using the command below:

awk 'BEGIN{FS=" ";OFS=","}{print "NO="$1,"Type="$2,"Pid="$3,"Status="$4,"Cause="$5,"Start="$6,"Rstr="$7,"Err="$8,"Sem="$9,"Time="$10,"Program="$11,"Cl="$12,"User="$13,"Action="$14,"Table="$15;}'

We want the output to be like this:

NO=0,Type=DIA,Pid=10897,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=

One more thing to add: these blank fields will have values from time to time.
This may do:

awk 'NR==1 {for (i=1;i<=NF;i++) a[i]=$i;c=NF;next} NR>2 {for (i=1;i<=c;i++) printf "%s=%s,",a[i],$i;print ""}' file

No=0,Type=DIA,Pid=10897,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=1,Type=DIA,Pid=10903,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=2,Type=DIA,Pid=10909,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=3,Type=DIA,Pid=10916,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=4,Type=DIA,Pid=10917,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=5,Type=DIA,Pid=9061,Status=Wait,Cause=yes,Start=no,Rstr=1,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=6,Type=DIA,Pid=10919,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=7,Type=DIA,Pid=10920,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=8,Type=UPD,Pid=10921,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=9,Type=BTC,Pid=24376,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=10,Type=BTC,Pid=25651,Status=Wait,Cause=yes,Start=no,Rstr=1,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=11,Type=BTC,Pid=25361,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,
No=12,Type=BTC,Pid=15201,Status=Wait,Cause=yes,Start=no,Rstr=0,Err=0,Sem=0,Time=NO_ACTION,Program=,Cl=,User=,Action=,Table=,

NO_ACTION is hard to handle, but it can be done using fixed field widths, e.g. FIELDWIDTHS="3 3 3 3 3 3 3 3". Since the header is not aligned with the data, though, it may be hard to do in a simple command.
There is no clear information on what your data looks like. We do not know if your data is tab-delimited (which would be nice) or only space-delimited. If it is space-delimited, as in the example, then it is hard to distinguish empty columns. The only way I can see to distinguish empty columns is by assuming that each header of the input file is aligned with its corresponding column, so we can use this to our advantage.

The following solution is for GNU awk 4.2 or higher. Have a file convert.awk which contains the following content:

BEGIN{ OFS="," }
# Read header and find the starting index of each column
# and the corresponding length.
# We assume that the headers are uniquely defined.
(FNR==1) {
    h[1]=$1; l=1
    for (i=2;i<=NF;++i) { h[i]=$i; t=index($0,$i); f=f " "(t-l); l=t }
    n=NF; FIELDWIDTHS = f " *"
    next
}
# skip ruler
/^[-]+$/ { next }
# print record
{
    for (i=1;i<=n;++i) {
        t=(i>NF ? "" : $i); gsub("(^ *| *$)","",t)
        printf "%s%s=%s",(i==1?"":OFS),h[i],t
    }
    printf ORS
}

and run it with:

$ awk -f convert.awk input > output

This outputs:

No=0,Type=DIA,Pid=10897,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=1,Type=DIA,Pid=10903,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=2,Type=DIA,Pid=10909,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=3,Type=DIA,Pid=10916,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=4,Type=DIA,Pid=10917,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=5,Type=DIA,Pid=9061,Status=Wait,Cause=,Start=yes,Rstr=no,Err=1,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=6,Type=DIA,Pid=10919,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=7,Type=DIA,Pid=10920,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=8,Type=UPD,Pid=10921,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=9,Type=BTC,Pid=24376,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=10,Type=BTC,Pid=25651,Status=Wait,Cause=,Start=yes,Rstr=no,Err=1,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=11,Type=BTC,Pid=25361,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=12,Type=BTC,Pid=15201,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=13,Type=BTC,Pid=5241,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=14,Type=BTC,Pid=23572,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=15,Type=BTC,Pid=8603,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=16,Type=BTC,Pid=1418,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=17,Type=BTC,Pid=18127,Status=Wait,Cause=,Start=yes,Rstr=no,Err=1,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=18,Type=BTC,Pid=14780,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=19,Type=BTC,Pid=18234,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=20,Type=BTC,Pid=14856,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=21,Type=SPO,Pid=10934,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
No=22,Type=UP2,Pid=10939,Status=Wait,Cause=,Start=yes,Rstr=no,Err=0,Sem=0,Time=0,Program=,Cl=,User=,Action=NO_ACTION,Table=
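For readers without GNU awk 4.2, the same header-position idea can also be sketched in Python (purely illustrative; it makes the same assumptions as the awk version, namely that each header is aligned with its column and the header names are unique):

# Sketch: derive column boundaries from the header line, then slice each
# data line at those offsets so blank columns come out empty.
import sys

def rows_to_kv(lines):
    header = lines[0].rstrip("\n")
    names = header.split()
    starts, pos = [], 0
    for name in names:                     # find where each header starts
        pos = header.index(name, pos)
        starts.append(pos)
        pos += len(name)
    bounds = list(zip(starts, starts[1:] + [None]))
    for line in lines[1:]:
        line = line.rstrip("\n")
        if not line.strip() or set(line.strip()) == {"-"}:
            continue                       # skip blank lines and the ruler
        values = [line[a:b].strip() for a, b in bounds]
        yield ",".join(f"{n}={v}" for n, v in zip(names, values))

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        for record in rows_to_kv(f.readlines()):
            print(record)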
To One-Hot encode or not to One-Hot encode
My data set has the day of the week number (Mon = 1, Tue = 2, Wed = 3, ...). My data look like this:

WeekDay Col1 Col2 Target
1 2.2 8 126
6 3.5 4 354
1 8.0 2 322
3 7.2 4 465
7 3.2 5 404
6 3.8 3 134
1 3.6 5 455
1 5.5 8 345
6 7.0 6 442

Shall I one-hot encode WeekDay so it will look like this?

WeekDay Col1 Col2 Target Mo Tu We Th Fr Sa Su
1 2.2 8 126 1 0 0 0 0 0 0
6 3.5 4 354 0 0 0 0 0 1 0
1 8.0 2 322 1 0 0 0 0 0 0
3 7.2 4 465 0 0 1 0 0 0 0
7 3.2 5 404 0 0 0 0 0 0 1
6 3.8 3 134 0 0 0 0 0 1 0
1 3.6 5 455 1 0 0 0 0 0 0
1 5.5 8 345 1 0 0 0 0 0 0
6 7.0 6 442 0 0 0 0 0 1 0

I am going to use Random Forest.
You should not use one-hot encoding since you are using a random forest model. An RF model will be able to find the patterns from label encoding as well, and RF models generally perform worse with one-hot encoding, as they may end up ignoring some of the day columns when building a tree. One-hot encoding also adds to the curse of dimensionality in your data, which is never good. One-hot encoding is better for methods like linear regression or logistic regression, where 1 (i.e. Monday) might get more importance than 6 (i.e. Saturday), as these models multiply the feature values by coefficients under the hood.
Generally, it's preferable to use one-hot encoding before using Random Forest. If this is the only categorical variable in your dataset, then go for one-hot encoding. If you use R's random forest, then, as far as I know, R's library deals with it itself. For scikit-learn that's not the case, and you have to one-hot encode yourself. There is a trade-off: one-hot encoding introduces sparsity, which is undesirable for tree-based models if the cardinality of the categorical variable is big, in other words, if there are many unique values in the categorical variable. However, Python's CatBoost handles categorical variables natively.
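To make the two options concrete, here is a small pandas sketch (the data is copied from the question; whether you feed X_label or X_onehot to the random forest is exactly the trade-off discussed above):

import pandas as pd

# Data copied from the question.
df = pd.DataFrame({
    "WeekDay": [1, 6, 1, 3, 7, 6, 1, 1, 6],
    "Col1":    [2.2, 3.5, 8.0, 7.2, 3.2, 3.8, 3.6, 5.5, 7.0],
    "Col2":    [8, 4, 2, 4, 5, 3, 5, 8, 6],
    "Target":  [126, 354, 322, 465, 404, 134, 455, 345, 442],
})

y = df["Target"]

# Option 1: keep WeekDay as an integer label (what the first answer prefers).
X_label = df[["WeekDay", "Col1", "Col2"]]

# Option 2: one-hot encode WeekDay (the layout shown in the question).
X_onehot = pd.get_dummies(df.drop(columns="Target"), columns=["WeekDay"], prefix="WD")

print(X_onehot.head())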
NFS network traffic with auto_direct
I'm interested in how the NFS network traffic goes when there is a redirect on the server side. E.g.: the client accesses dir_a, mounted on NFS server_a, but on server_a, /etc/auto_direct contains an entry that redirects dir_a to dir_b on server_b. In this case, which server will the NFS client communicate with? The most important question is: between which machines will the bulk of the NFS data traffic take place? All this is for Solaris 10, if that matters.
I've made some tests, and from those it seems that the client somehow knows about the redirect:

user#client $ df dir_a
dir_a(auto_direct ): 0 blocks 0 files

I did some file accesses in dir_a and watched the client's interfaces towards server_a and server_b. On the client I ran:

cd dir_a; while true; do echo 1111111111111111111111111111 >> t; done

On the client's interface to server_a there was no traffic increase (only in the total traffic). The time when the above loop was running is marked with * below.

nmsadm#atrcxb1951: netstat -I bnxe0 10
    input   bnxe0     output       input  (Total)    output
packets  errs  packets  errs  colls  packets  errs  packets  errs  colls
8819     0     4476     0     0      8920     0     4494     0     0
8800     0     4451     0     0      8871     0     4466     0     0
8753     0     4371     0     0      27468    0     26777    0     0    *
8704     0     4378     0     0      27772    0     27227    0     0    *
8734     0     4381     0     0      28425    0     28044    0     0    *
8789     0     4453     0     0      13053    0     9317     0     0
8765     0     4407     0     0      8871     0     4420     0     0

While on the client's interface towards server_b there was:

nmsadm#atrcxb1951:~$ netstat -I bnxe4 10
    input   bnxe4     output       input  (Total)    output
packets  errs  packets  errs  colls  packets  errs  packets  errs  colls
121      0     17       0     0      8942     0     4494     0     0
10467    0     12473    0     0      19264    0     16927    0     0    *
18579    0     22362    0     0      27291    0     26732    0     0    *
21735    0     25978    0     0      30466    0     30364    0     0    *
10971    0     12970    0     0      19760    0     17395    0     0    *
35       0     12       0     0      8782     0     4432     0     0

So in my case it seems that the client handles the redirection itself and server_a does not proxy the NFS data traffic. I'd still be curious under what circumstances it works like this (any configuration option, etc.).