NFS network traffic with auto_direct

I'm interested in how the NFS network traffic flows when there is a redirect on the server side.
E.g.: the client accesses dir_a, mounted from NFS server_a, but on server_a, /etc/auto_direct contains an entry that redirects dir_a to dir_b on server_b.
In this case, which server will the NFS client communicate with? Most importantly, between which machines will the bulk of the NFS data traffic take place?
All this is for Solaris 10, if that matters.
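For concreteness, a direct map entry of the kind described might look like this (a sketch; the paths, mount options and host name are hypothetical):
# /etc/auto_direct (direct map consulted by the automounter)
/dir_a    -rw    server_b:/dir_b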

I've made some tests, and from those it seems that the client somehow knows about the redirect:
user@client $ df dir_a
dir_a(auto_direct): 0 blocks 0 files
I did some file accesses in dir_a and watched the client's interfaces towards server_a and server_b.
On the client I ran:
cd dir_a; while true; do echo 1111111111111111111111111111 >> t; done
On the client's interface towards server_a there was no traffic increase (only in the total traffic); the intervals when the loop was running are marked with * below:
nmsadm@atrcxb1951: netstat -I bnxe0 10
    input   bnxe0    output          input   (Total)   output
packets errs packets errs colls  packets errs packets errs colls
   8819    0    4476    0     0     8920    0    4494    0     0
   8800    0    4451    0     0     8871    0    4466    0     0
   8753    0    4371    0     0    27468    0   26777    0     0   *
   8704    0    4378    0     0    27772    0   27227    0     0   *
   8734    0    4381    0     0    28425    0   28044    0     0   *
   8789    0    4453    0     0    13053    0    9317    0     0
   8765    0    4407    0     0     8871    0    4420    0     0
While on the client's interface towards server_b there was:
nmsadm@atrcxb1951:~$ netstat -I bnxe4 10
    input   bnxe4    output          input   (Total)   output
packets errs packets errs colls  packets errs packets errs colls
    121    0      17    0     0     8942    0    4494    0     0
  10467    0   12473    0     0    19264    0   16927    0     0   *
  18579    0   22362    0     0    27291    0   26732    0     0   *
  21735    0   25978    0     0    30466    0   30364    0     0   *
  10971    0   12970    0     0    19760    0   17395    0     0   *
     35    0      12    0     0     8782    0    4432    0     0
So in my case it seems that the client handles the redirection and server_a does not proxy the NFS data traffic.
I'd still be curious under what circumstances it works like this. Is it governed by any configuration option, etc.?

A problem about the difference of SHA-1 logical functions between Wikipedia and FIPS 180-4

When calculating SHA-1, we need a sequence of logical functions, f0, f1, …, f79.
I noticed that the function definitions on Wikipedia and in the standard manual (FIPS 180-4) are different.
Oddly, when I chose the ones in the standard manual, the SHA-1 result came out wrong.
I used online SHA-1 calculators and found that everyone uses the functions as written on Wikipedia.
Why?
Here are the truth tables for both versions of 'choose' (rounds 0..19) and 'majority' (rounds 40..59); for 'parity' (rounds 20..39 and 60..79) both sources use xor. Please identify the rows in which the ior (inclusive-or, ∨) result differs from the xor (exclusive-or, ⊕) result; those are the cases in which the two formulas would produce different results.
'choose' (Ch), terms x∧y and ¬x∧z, combined with ⊕ (xor) in one source and ∨ (ior) in the other:
x y z | x∧y ¬x∧z | ior xor
0 0 0 |  0    0  |  0   0
0 0 1 |  0    1  |  1   1
0 1 0 |  0    0  |  0   0
0 1 1 |  0    1  |  1   1
1 0 0 |  0    0  |  0   0
1 0 1 |  0    0  |  0   0
1 1 0 |  1    0  |  1   1
1 1 1 |  1    0  |  1   1
'majority' (Maj), terms x∧y, x∧z and y∧z, again combined with ⊕ in one source and ∨ in the other:
x y z | x∧y x∧z y∧z | ior xor
0 0 0 |  0   0   0  |  0   0
0 0 1 |  0   0   0  |  0   0
0 1 0 |  0   0   0  |  0   0
0 1 1 |  0   0   1  |  1   1
1 0 0 |  0   0   0  |  0   0
1 0 1 |  0   1   0  |  1   1
1 1 0 |  1   0   0  |  1   1
1 1 1 |  1   1   1  |  1   1
Hint: there are no differences. The results are always the same, so it doesn't matter which formula you use; as long as you apply it correctly, you get the correct result.
In fact, on checking my copy of 180-4 this is even stated in section 4.1, immediately above the section you quoted:
... Each of the algorithms [for SHA-1, SHA-256 group, and SHA-512 group] include Ch(x, y, z)
and Maj(x, y, z) functions; the exclusive-OR operation (⊕) in these functions may be replaced
by a bitwise OR operation (∨) and produce identical results.
If something you did 'went wrong', it's because you did something wrong; but nobody here is psychic, so we have no idea at all what you did wrong.
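As an aside (not part of the original answer), here is a minimal Python check that exhaustively verifies the equivalence for single bits; it extends to whole 32-bit words because the operations are bitwise:
# verify that xor and inclusive-or give identical results
# for the Ch and Maj term combinations
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            # Ch: (x AND y) combined with ((NOT x) AND z)
            assert ((x & y) ^ (~x & z)) & 1 == ((x & y) | (~x & z)) & 1
            # Maj: the three pairwise ANDs combined
            assert ((x & y) ^ (x & z) ^ (y & z)) == ((x & y) | (x & z) | (y & z))
print("xor and ior agree on all inputs")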

Write GeoTIFF File to GRIB2 Using GDAL

I am looking to convert a GeoTIFF file to GRIB2 and define several pieces of metadata manually, as described in the literature provided here. I am using the GDAL library, specifically the gdal_translate utility.
My attempt to convert and pass specific metadata is as follows:
gdal_translate -b 1 -mo DISCIPLINE=0 IDS_CENTER=248 IDS_SUBCENTER=4 IDS_MASTER_TABLE=24 IDS_SIGNF_REF_TIME=1 IDS_REF_TIME=2020-07-02T00:00:00Z IDS_PROD_STATUS=0 IDS_TYPE=1 PDS_PDTN=0 PDS_TEMPLATE_NUMBERS="0 4 2 0 96 0 0 0 1 0 0 0 0 103 0 0 0 0 2 255 0 0 0 0 0 7 228 7 2 13 0 0 1 0 0 0 0 2 2 1 0 0 0 1 255 0 0 0 0" PDS_TEMPLATE_ASSEMBLED_VALUES="0 4 2 0 96 0 0 1 0 103 0 2 255 0 0 2020 7 2 13 0 0 1 0 2 2 1 1 255 0" input.tif output.grb2
However, upon executing this command I receive the following error:
ERROR 6: Too many command options 'IDS_MASTER_TABLE=24'
Potential causes: not using the correct option (currently -mo) to pass the metadata, the metadata pairs needing to be enclosed in quotation marks, etc.
Any help would be greatly appreciated!
You need to add an -mo flag for every metadata item. Your command would become:
$ gdal_translate -b 1 \
-mo DISCIPLINE=0 \
-mo IDS_CENTER=248 \
# etc.
input.tif output.grb2
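If you are scripting the conversion, the same fix can be sketched in Python by building one -mo flag per key/value pair (a sketch, assuming gdal_translate is on the PATH; the metadata dict is trimmed, fill in the remaining IDS_*/PDS_* pairs from the question):
import subprocess

metadata = {
    "DISCIPLINE": "0",
    "IDS_CENTER": "248",
    # ... remaining IDS_*/PDS_* pairs go here ...
}

cmd = ["gdal_translate", "-b", "1"]
for key, value in metadata.items():
    cmd += ["-mo", f"{key}={value}"]  # one -mo per metadata item
cmd += ["input.tif", "output.grb2"]
subprocess.run(cmd, check=True)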

How to use a list of categories that an example belongs to as a feature when solving a classification problem?

One of the features looks like this:
1 170,169,205,174,173,246,247,249,380,377,383,38...
2 448,104,239,277,276,99,154,155,76,412,139,333,...
3 268,422,419,124,1,17,431,343,341,435,130,331,5...
4 50,53,449,106,279,420,161,74,123,364,231,18,23...
5 170,169,205,174,173,246,247,249,380,377,383,38...
It tells us which categories the example belongs to.
How should I use it when solving a classification problem?
I've tried to use dummy variables,
df=df.join(features['cat'].str.get_dummies(',').add_prefix('contains_'))
but we don't know whether there will be other categories that were not mentioned in the training set, so I don't know how to preprocess all the objects.
That's interesting. I didn't know about str.get_dummies, but maybe I can help you with the rest.
You basically have two problems:
1. The set of categories you get later may contain categories that were unknown while training the model. You have to filter these out.
2. The set of categories you get later may not contain all known categories. You have to make sure you generate dummies for them as well.
Problem 1: filtering out unknown/unwanted categories
The first problem is easy to solve:
# create a set of all categories you want to allow;
# either define it as a fixed set, or extract it from your
# column like this (the output of the map is irrelevant,
# it is only used for its side effect on valid_categories)
valid_categories= set()
df['categories'].str.split(',').map(valid_categories.update)
# now if you want to normalize your data before you do the
# dummy encoding, you can cleanse the data by
# splitting it, creating an intersection and then joining
# it back again to get a string on which you can work with
# str.get_dummies
df['categories'].str.split(',').map(lambda l: valid_categories.intersection(l)).str.join(',')
Problem 2: generating dummies for all known categories
The second problem can be solved by adding a dummy row that
contains all categories (e.g. with df.append) just before you
call get_dummies, and removing it right after get_dummies.
# e.g. you can do it like this:
# get a new index value, so the
# row can be removed again later
# (this only works if you have
# a numeric index)
dummy_index= df.index.max()+1
# assign all known categories to the dummy row
df.loc[dummy_index]= {'id':999, 'categories': ','.join(valid_categories)}
# now do the processing steps mentioned in the
# section above, then create the dummies;
# after that, remove the dummy row again
df.drop(labels=[dummy_index], inplace=True)
Example:
import io
import pandas as pd
raw= """id categories
1 170,169,205,174,173,246,247
2 448,104,239,277,276,99,154
3 268,422,419,124,1,17,431,343
4 50,53,449,106,279,420,161,74
5 170,169,205,174,173,246,247"""
df= pd.read_fwf(io.StringIO(raw))
valid_categories= set()
df['categories'].str.split(',').map(valid_categories.update)
# remove 154 and 170 for demonstration purposes
valid_categories.remove('170')
valid_categories.remove('154')
df['categories'].str.split(',').map(lambda l: valid_categories.intersection(l)).str.join(',').str.get_dummies(',')
Out[622]:
1 104 106 124 161 169 17 173 174 205 239 246 247 268 276 277 279 343 419 420 422 431 448 449 50 53 74 99
0 0 0 0 0 0 1 0 1 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 1
2 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 1 1 0 0 0 0 0 0
3 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 1 1 0
4 0 0 0 0 0 1 0 1 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
You can see that there are no columns for 154 and 170.
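As an aside (not from the original answer), scikit-learn's MultiLabelBinarizer handles both problems in one step: passing classes= pins the column set, and unseen labels are ignored with a warning during transform. A minimal sketch, assuming scikit-learn is available and valid_categories is the set built above:
from sklearn.preprocessing import MultiLabelBinarizer

# pin the column set to the known categories
mlb = MultiLabelBinarizer(classes=sorted(valid_categories))
# unseen labels in new data are dropped with a UserWarning
X = mlb.fit_transform(df['categories'].str.split(','))
dummies = pd.DataFrame(X, columns=mlb.classes_, index=df.index)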

WHERE clause is not working

Using InfluxQL, when I try the following query
select "P_askbid_midprice1" from "/HFT/Data_HFT/OrderBook/DCIX_OB" limit 50
I get the following result:
name: /HFT/Data_HFT/OrderBook/DCIX_OB
time P_askbid_midprice1
---- ------------------
2015-05-30T00:00:00Z 0
2015-05-30T00:00:01Z 0
2015-05-30T00:00:02Z 0
2015-05-30T00:00:03Z 0
2015-05-30T00:00:04Z 0
2015-05-30T00:00:05Z 0
2015-05-30T00:00:06Z 0
2015-05-30T00:00:07Z 0
2015-05-30T00:00:08Z 0
2015-05-30T00:00:09Z 0
2015-05-30T00:00:10Z 0
2015-05-30T00:00:11Z 0
2015-05-30T00:00:12Z 0
2015-05-30T00:00:13Z 0
2015-05-30T00:00:14Z 0
2015-05-30T00:00:15Z 0
2015-05-30T00:00:16Z 0
2015-05-30T00:00:17Z 0
2015-05-30T00:00:18Z 0
2015-05-30T00:00:19Z 0
2015-05-30T00:00:20Z 0
2015-05-30T00:00:21Z 0
2015-05-30T00:00:22Z 0
2015-05-30T00:00:23Z 0
2015-05-30T00:00:24Z 0
2015-05-30T00:00:25Z 0
2015-05-30T00:00:26Z 0
2015-05-30T00:00:27Z 0
2015-05-30T00:00:28Z 0
2015-05-30T00:00:29Z 0
2015-05-30T00:00:30Z 0
2015-05-30T00:00:31Z 0
2015-05-30T00:00:32Z 0
2015-05-30T00:00:33Z 0
2015-05-30T00:00:34Z 0
2015-05-30T00:00:35Z 0
2015-05-30T00:00:36Z 0
2015-05-30T00:00:37Z 0
2015-05-30T00:00:38Z 0
2015-05-30T00:00:39Z 0
2015-05-30T00:00:40Z 0
But with the query
select "P_askbid_midprice1" from "/HFT/Data_HFT/OrderBook/DCIX_OB" WHERE time > '2016-05-30' and time < '2015-05-31'
I get nothing, even though it is pretty similar to the previous one.
What is the problem with this query?
You need to use an or statement instead of an and statement. Time cannot be both "after" May 2016 and "before" May 2015; as written, the lower bound lies after the upper bound, so the condition describes an empty interval and no points can match. It has to be one or the other.
select "P_askbid_midprice1"
from "/HFT/Data_HFT/OrderBook/DCIX_OB"
WHERE
time > '2016-05-30'
or time < '2015-05-31'
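(If the 2016 was simply a typo and the intent was to select a single day in May 2015, the original and version also works once the year is fixed; a sketch:)
select "P_askbid_midprice1"
from "/HFT/Data_HFT/OrderBook/DCIX_OB"
WHERE
time > '2015-05-30'
and time < '2015-05-31'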

How to match a GRE key using iptables?

I want to match on the GRE tunnel key (5) using iptables; my command is below:
iptables -A OUTPUT -t raw -p gre -o eth2 -m conntrack --ctrepldstport 5 -j LOG --log-level debug
However, this is not working. Could anyone help point out what is wrong?
root@promg-2n-a-dhcp85:~/openvswitch# iptables --version
iptables v1.4.12
Thanks,
http://www.gossamer-threads.com/lists/iptables/devel/66339
"While porting some changes of the 2.6.21-rc7 pptp/proto_gre conntrack
and nat modules to a 2.4.32 kernel I noticed that the gre_key function
returns a wrong pointer to the GRE key of a version 0 packet thus corrupting
the packet payload.
The intended behaviour for GREv0 packets is to act like
nf_conntrack_proto_generic/nf_nat_proto_unknown so I have ripped the
offending functions (not used anymore) and modified the xx_nat_proto_gre
modules to not touch version 0 (non PPTP) packets."
So, a nice way of fixing problems :-(
It seems this patch was accepted silently, and matching by GRE keys will never work again in Linux, contrary to what is claimed in the iptables man page.
Shameless self-advertising of one of my OSS modules: a while ago I wrote a custom iptables module, "xt_bpfl4", to solve an IPv6 matching problem, and it also works in this case.
Use the BPF expression below to match a key of 0x917e805a:
udp[0:1]&0x20=0x20 and ((udp[0:1]&0xA0=0x20 and udp[4:4]=0x917e805a) or (udp[0:1]&0xA0=0xA0 and udp[16:4]=0x917e805a))
This compiles to:
(000) ldb [0]
(001) and #0x20
(002) jeq #0x20 jt 3 jf 12
(003) ldb [0]
(004) and #0xa0
(005) jeq #0x20 jt 6 jf 8
(006) ld [4]
(007) jeq #0x917e805a jt 11 jf 12
(008) jeq #0xa0 jt 9 jf 12
(009) ld [16]
(010) jeq #0x917e805a jt 11 jf 12
(011) ret #65535
(012) ret #0
or in the format required by xt_bpf & xt_bpfl4:
13,48 0 0 0,84 0 0 32,21 0 9 32,48 0 0 0,84 0 0 160,21 0 2 32,32 0 0 4,21 3 4 2440986714,21 0 3 160,32 0 0 16,21 0 1 2440986714,6 0 0 65535,6 0 0 0
And so match with the following rule:
iptables -I INPUT -p 47 -m bpfl4 --bytecodel4 '13,48 0 0 0,84 0 0 32,21 0 9 32,48 0 0 0,84 0 0 160,21 0 2 32,32 0 0 4,21 3 4 2440986714,21 0 3 160,32 0 0 16,21 0 1 2440986714,6 0 0 65535,6 0 0 0'
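As an aside (not from the original answer): the comma-separated form above is an instruction count followed by one 'code jt jf k' group per instruction. For the stock xt_bpf match, which applies the filter from the start of the network header rather than at layer 4 like xt_bpfl4, the iptables-extensions man page generates such bytecode with the nfbpf_compile utility from the iptables source tree, along these lines (hypothetical filter shown):
iptables -A OUTPUT -m bpf --bytecode "`nfbpf_compile RAW 'ip proto 47'`" -j LOG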