Meaning of the number sent with the NETCONF payload (SSH)

I am using a ConfD-based NETCONF agent. When I checked the XML payload received by the agent, I saw a number prefixed to the payload. It is not the message-id. What is this prefix? Is there an RFC reference that explains it?
For example, "#164" is prefixed to the get-config payload:
#164
<?xml version="1.0" encoding="UTF-8"?>
<rpc message-id="0"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
</get-config>
</rpc>
Similarly, different prefixes are used for the other NETCONF operations, as listed below.
get: #118
close-session: #128
lock: #154
unlock: #158
delete-config: #172
edit-config: #190
copy-config: #193

NETCONF over SSH uses a special form of chunked framing for the exchanged XML stanzas. Essentially, it's a way for the parser on the other side to tell where an individual message ends.

The #164 you are seeing is part of the NETCONF 1.1 over SSH chunked framing mechanism, defined in RFC 6242, Section 4.2, "Chunked Framing Mechanism".
As an example, the message:
<rpc message-id="102"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<close-session/>
</rpc>
could be encoded as (using '\n' as a visible representation of the
LineFeed character):
C: \n#4\n
C: <rpc
C: \n#18\n
C: message-id="102"\n
C: \n#79\n
C: xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">\n
C: <close-session/>\n
C: </rpc>
C: \n##\n
The byte length in your example matches if you strip away all the formatting characters:
<?xml version="1.0" encoding="UTF-8"?><rpc message-id="0" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><get-config><source><running/></source></get-config></rpc>
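You can check this yourself with standard shell tools; a quick sketch (the XML string is the payload from the question, flattened onto one line, and printf '%s' is used so no trailing newline skews the count):
# Count the bytes of the flattened <rpc> payload.
printf '%s' '<?xml version="1.0" encoding="UTF-8"?><rpc message-id="0" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><get-config><source><running/></source></get-config></rpc>' | wc -c
# Should print 164, the chunk size announced in the framing header.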

Related

Trying to understand SSLKEYLOGFILE environment variable output format

I have been messing around with the SSLKEYLOGFILE environment variable, and I am trying to understand what everything inside the output it gives me (the .log file with all the session keys) means.
Each line contains what I understand to be keys, but I notice a space in the middle of each line, which suggests two separate values per line. What exactly are the two different values, and how is Wireshark able to use this file to decrypt SSL traffic?
The answer to your question is in a comment from the commit that added this feature:
* - "CLIENT_RANDOM xxxx yyyy"
* Where xxxx is the client_random from the ClientHello (hex-encoded)
* Where yyy is the cleartext master secret (hex-encoded)
* (This format allows non-RSA SSL connections to be decrypted, i.e.
* ECDHE-RSA.)
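So each line in the keylog has the form CLIENT_RANDOM <client_random> <master_secret>, both hex-encoded. A minimal sketch of feeding it to Wireshark from the command line (the hex values below are shortened placeholders, not real keys, and the preference is named tls.keylog_file in current Wireshark versions, ssl.keylog_file in older ones):
# Hypothetical SSLKEYLOGFILE excerpt (placeholder values):
#   CLIENT_RANDOM 52e3f1...<64 hex chars total>...9ab0 7d1c...<96 hex chars total>...44fe
# Open a capture with the keylog preference set so Wireshark can decrypt the TLS traffic:
wireshark -r capture.pcapng -o "tls.keylog_file:/path/to/sslkeys.log"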

Why does a received ZFS dataset use less space than the original?

I have a dataset on server1 that I want to back up to a second server, server2.
Server1 (original):
zfs list -o name,used,avail,refer,creation,usedds,usedsnap,origin,compression,compressratio,refcompressratio,mounted,atime,lused storage/iscsi/webhost-old produces:
NAME                       USED   AVAIL  REFER  CREATION             USEDDS  USEDSNAP  ORIGIN  COMPRESS  RATIO  REFRATIO  MOUNTED  ATIME  LUSED
storage/iscsi/webhost-old  67,8G  1,87T  67,8G  Út kvě 31 6:54 2016  67,8G   16K       -       lz4       1.00x  1.00x     -        -      67,4G
Sending volume to the 2nd server:
zfs send storage/iscsi/webhost-old | pv | ssh -c arcfour,aes128-gcm@openssh.com root@10.0.0.2 zfs receive -Fduv pool/bkp-storage
received 69,6GB stream in 378 seconds (189MB/sec)
Server2 zfs list produces:
NAME                                USED   AVAIL  REFER  CREATION              USEDDS  USEDSNAP  ORIGIN  COMPRESS  RATIO  REFRATIO  MOUNTED  ATIME  LUSED
pool/bkp-storage/iscsi/webhost-old  36,1G  3,01T  36,1G  Pá pro 29 10:25 2017  36,1G   0         -       lz4       1.15x  1.15x     -        -      28,4G
Why is there such a difference in sizes? Thanks.
From what you posted, I noticed 3 things that seemed odd:
the compressratio is 1.15x on system 2, but 1.00x on system 1
on system 2, used is 1.27x higher than logicalused
the logicalused value and the size reported by zfs receive are ~2.3x higher on system 1 than on system 2
These terms are all defined in the man page, but it is still tricky to reverse-engineer explanations for them in practice.
(1) could happen if you enabled compression on the source dataset after you wrote all the data to it, since ZFS doesn't rewrite the data to compress it when you enable that setting. The data sent by zfs send is uncompressed unless you use -c, but system 2 will try to compress it as it runs zfs receive if the setting is enabled on the destination dataset. If both system 1 and system 2 had the same compression settings before the data was written, they would have the same compressratio as well.
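If the goal is to make the two sides more directly comparable, one option on reasonably recent OpenZFS is to send the blocks exactly as they are stored on disk. A rough sketch, reusing the dataset and pool names from the question and a made-up snapshot name:
# Take a snapshot and send it with on-disk (compressed) blocks preserved; -c requires
# an OpenZFS version that supports compressed send streams.
zfs snapshot storage/iscsi/webhost-old@bkp1
zfs send -c storage/iscsi/webhost-old@bkp1 | ssh root@10.0.0.2 zfs receive -Fduv pool/bkp-storage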
(2) can happen due to metadata written along with your data, but in this case it's too high for "normal" metadata, which accounts for 1-2% of most pools. It's probably caused by a pool-wide setting, like configuring RAID-Z, or a weird combination of striping and mirroring (like 4 stripes, but with one of them being a mirror).
For (3), I re-read the man page to try to figure it out:
logicalused
The amount of space that is "logically" consumed by this dataset and
all its descendents. See the used property. The logical space
ignores the effect of the compression and copies properties, giving a
quantity closer to the amount of data that applications see.
If you were sending a dataset (instead of a single iSCSI volume) and the send size matched system 2's logicalused value (instead of system 1's), I would guess you forgot to send some child datasets (i.e. by using zfs send -R). However, neither of those are true in this case.
I had to do some additional digging -- this blog post from 2005 might contain the explanation. If system 1 didn't have compression enabled when the data was written (like I guessed above for (1)), the function responsible for not writing zeroed-out blocks (zio_compress_data) would not be run, so you probably have a bunch of empty blocks written to disk, and accounted for in the logicalused size. However, since lz4 is configured on system 2, it would run there, and those blocks would not be counted.
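If you want to test that zero-block theory, a small experiment along these lines should show the difference in accounting (the pool name tank and the dataset names are made up; adjust for your system):
# Write the same all-zero file into a dataset without and with compression,
# then compare the space accounting properties.
zfs create -o compression=off tank/zerotest-off
dd if=/dev/zero of=/tank/zerotest-off/zeros bs=1M count=1024
zfs create -o compression=lz4 tank/zerotest-on
dd if=/dev/zero of=/tank/zerotest-on/zeros bs=1M count=1024
sync
zfs get used,logicalused,compressratio tank/zerotest-off tank/zerotest-on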

How can I use the value of mp2t.af.pcr as a Tshark field?

I have a wireshark capture that contains an RTP multicast stream (plus some other incidental data).
Using a Tshark command like the following, I can produce a CSV of the RTP timestamp compared with the packet capture time:
tshark.exe -r "capture.pcap" -Eseparator=, -Tfields -e rtp.timestamp -e frame.time_epoch -d udp.port==5000,rtp
This decodes the UDP packets as RTP, and successfully prints out the two fields as expected.
Now, my question: The payload of the RTP stream is an MPEG2 Transport Stream, and I also want to print the PCR value (if there is one) alongside the packet and RTP timestamps.
In wireshark, I can see the PCR being decoded correctly, however using a command like the following:
tshark.exe -r "HBO HD CZ.pcap" -Eseparator=,-Tfields -e rtp.timestamp -e frame.time_epoch -e mp2t.af.pcr -d udp.port==5000,mp2t
...only prints out a "1" if there is a PCR present, not the actual value. I have also checked the .pcr_flag field to confirm that the two fields are not swapped, but I still see the same result.
The documentation seems to call mp2t.af.pcr a "Label", does this mean that Tshark is not able to use it as a field? Is there a way to generate a CSV with these values?
(What part of the documentation calls it a "Label"? That's a somewhat odd description of a named field.)
The problem is that the value that Wireshark displays after "base(XXX)*300 + ext(YYY)" is calculated and displayed, but the field itself isn't given an integral type and is instead given a type that doesn't have a value. Arguably, it should be an FT_UINT64 field and should be given a value, so that you can filter on it and can print the value in TShark.
Please file an enhancement request for this on the Wireshark Bugzilla.
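Until that happens, one possible workaround is to fall back on the verbose protocol tree and pull the computed PCR out of its text. A rough sketch (the exact label text printed by the MP2T dissector may differ between Wireshark versions, and -Y needs a reasonably recent tshark; older builds used -R):
# Decode the stream as MP2T, keep only packets carrying a PCR, and grep the verbose
# decode for the computed PCR line.
tshark -r "capture.pcap" -d udp.port==5000,mp2t -Y "mp2t.af.pcr_flag == 1" -V | grep -i "clock reference"
# Should print the dissector's computed PCR lines, e.g. 'base(...)*300 + ext(...)'.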

Size of the ICMP type 11 packet payload

What's the size of the ICMP packet payload when the type is 11, i.e. time exceeded?
Since it contains an IP header and the first 8 bytes of the payload of the packet that generated the ICMP message, I thought its size was 20 + 8 = 28 bytes.
I'm replaying some common user traffic with TTL=1. In the ICMP messages I have dumped, I noticed that:
all ICMP packets generated by UDP packets have a payload of 28 bytes
all those generated by TCP packets have a payload of 40 bytes
Since I need to match ICMP time-exceeded messages with the packets that triggered them by comparing those bytes, this piece of information is essential, but I can't figure out why this happens.
The problem is that you're quoting the "Internet header + 8 bytes" rule from RFC 792, Page 4, but the requirements were changed by RFC 1812...
Time Exceeded Message (in RFC 792)
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |     Code      |          Checksum             |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                             unused                            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      Internet Header + 64 bits of Original Data Datagram      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
RFC 1812, Section 4.3.2.3 dramatically increases the allowable payload in an ICMP error message:
4.3.2.3 Original Message Header
Historically, every ICMP error message has included the Internet
header and at least the first 8 data bytes of the datagram that
triggered the error. This is no longer adequate, due to the use of
IP-in-IP tunneling and other technologies. Therefore, the ICMP
datagram SHOULD contain as much of the original datagram as possible
without the length of the ICMP datagram exceeding 576 bytes. The
returned IP header (and user data) MUST be identical to that which
was received, except that the router is not required to undo any
modifications to the IP header that are normally performed in
forwarding that were performed before the error was detected (e.g.,
decrementing the TTL, or updating options).
The ICMP errors generated for your replayed packets should therefore contain all the information from the IP and TCP layers of the original packet.
As you noted, the ICMP payload is the IP header plus 8 octets of the original packet's payload. IP headers, however, are not always 20 octets long; 20 is only the minimum. The IP header itself may contain options, and the header length is indicated by the value in the IHL field of the header; see Section 3.1 of RFC 791. So it looks like the TCP packets have 12 additional octets of options in their IP headers. RFC 791 defines some standard options such as source routing and timestamping. You'll have to decode the header to determine which options are being used.
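If you'd rather confirm that from the capture than by hand, something along these lines should show the header lengths next to each time-exceeded packet (the capture file name is a placeholder; the field names are the standard Wireshark ones, and -e ip.hdr_len prints a value for both the outer IP header and the quoted inner one, comma-separated):
# List time-exceeded packets with their IP header length(s); an inner value of 32
# would mean 12 bytes of IP options on the original packet.
tshark -r capture.pcap -Y "icmp.type == 11" -T fields -e frame.number -e ip.hdr_len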
I would like to add for future reference that not only do ICMP payloads vary in size, as Mike said, they might also be longer than 128 bytes in the case of ICMP extensions for MPLS. See this draft for more information.

Showing a long running shell process with Apache

I have a CGI script which takes about 1 minute to run. Right now Apache only returns results to the browser once the process has finished.
How can I make it show the output as if it were run in a terminal?
Here is an example which demonstrates the problem.
I want to see the numbers 1 to 5 appear as they are printed.
I had to disable mod_deflate to get chunked mode working with Apache; I did not find another way for my CGI to disable the automatic gzip encoding.
There are several factors at play here. To rule a few of them out up front: neither Apache nor bash is buffering any of the output. You can verify that with this script:
#!/bin/sh
# Emit the CGI header block; the blank line before END terminates the headers.
cat <<END
Content-Type: text/plain

END
for i in $(seq 1 10)
do
    echo $i
    sleep 1
done
Stick this somewhere that Apache is configured to execute CGI scripts, and test with netcat:
$ nc localhost 80
GET /cgi-bin/chunkit.cgi HTTP/1.1
Host: localhost

HTTP/1.1 200 OK
Date: Tue, 24 Aug 2010 23:26:24 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.7l DAV/2
Transfer-Encoding: chunked
Content-Type: text/plain

2
1
2
2
2
3
2
4
2
5
2
6
2
7
2
8
2
9
3
10
0
When I do this, I see in netcat each number appearing once per second, as intended.
Note that my version of Apache, at least, applies the chunked transfer encoding automatically, presumably because I didn't include a Content-Length; if you return the Transfer-Encoding: chunked header yourself, then you need to encode the output of your script in the chunked transfer encoding. That's pretty easy, even in a shell script:
chunk () {
    printf '%x\r\n' "${#1}"  # Length of the chunk in hex, CRLF
    printf '%s\r\n' "$1"     # Chunk itself, CRLF
}
chunk $'1\n' # This is a Bash-ism, since it's pretty hard to get a newline
chunk $'2\n' # character portably.
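One detail to add if you take this route: a chunked body has to be terminated explicitly with a zero-length chunk, or the client will sit waiting for more data. Something like:
end_chunks () {
    printf '0\r\n\r\n'  # zero-length chunk, then the blank line that ends the body
}
end_chunks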
However, serve this to a browser, and you'll get varying results depending on the browser. On my system, Mac OS X 10.5.8, I see different behaviors between my browsers. In Safari, Chrome, and Firefox 4 beta, I don't start seeing output until I've sent somewhere around 1000 characters (I would guess 1024 including the headers, or something like that, but I haven't narrowed it down to the exact behavior). In Firefox 3.6, it starts displaying immediately.
I would guess that this delay is due to content type sniffing, or character encoding sniffing, which are in the process of being standardized. I have tried to see if I could get around the delay by specifying proper content types and character encodings, but without luck. You may have to send some padding data (which would be pretty easy to do invisibly if you use HTML instead of plain text), to get beyond that initial buffer.
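If you do end up needing padding to get past that initial buffer, it's easy to generate from the shell; for example, roughly 1 KiB of spaces wrapped in an HTML comment (invisible once you switch to text/html):
# Emit roughly 1 KiB of throwaway padding before the real content.
printf '<!-- %s -->\n' "$(head -c 1024 /dev/zero | tr '\0' ' ')"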
Once you start streaming HTML instead of plain text, the structure of your HTML matters too. Some content can be displayed progressively, while some cannot. For instance, streaming <div>s into the body, with no styling, works fine and can display progressively as it arrives. If you try to open a <pre> tag and just stream content into that, WebKit-based browsers will wait until they see the close tag to try to lay it out, while Firefox is happy to display it progressively. I don't know all of the corner cases; you'll have to experiment to see what works for you.
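As a concrete sketch of the progressive case (building on the earlier script, with minimal, made-up HTML):
#!/bin/sh
# Stream one <div> per second; browsers that render progressively will show each as it arrives.
cat <<END
Content-Type: text/html

<!DOCTYPE html><html><body>
END
for i in $(seq 1 10)
do
    echo "<div>step $i</div>"
    sleep 1
done
echo "</body></html>"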
Anyhow, I hope this helps you get started. Let me know if you have any more questions!