Reading Packet Id from Byte - vb.net

I have a packet that I need to send to a client with an ID of 255. I've had no problems sending packets with IDs of 0, 1, and 2, but this ID has to be 255. For some reason, after the translation has happened, both my server and the client get "63" for any ID greater than 127.
This is the code I am using:
Console.WriteLine(Asc(System.Text.ASCIIEncoding.ASCII.GetString(System.Text.ASCIIEncoding.ASCII.GetBytes(Chr(255)))))
Now, this is an overly complicated version of what the server does; the paired encode/decode calls are there only so you can see the round trip that loses the value.
Where it says 255 is the packet ID I need sent in the format above. As I said, anything larger than 127 comes back as "63". Very annoying.
Any help is appreciated.

Taken from here:
ASCIIEncoding corresponds to the Windows code page 20127. Because ASCII is a 7-bit encoding, ASCII characters are limited to the lowest 128 Unicode characters, from U+0000 to U+007F.
So you can't use that technique, because 255 is not a valid ASCII character.
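The same 7-bit clamping can be demonstrated in any language; here is a quick Python sketch (Python is used only for illustration, the mechanism in .NET's ASCIIEncoding is the same: unencodable characters are replaced with '?', which is byte 63):

```python
# ASCII is 7-bit: code points above 127 cannot be encoded, and a
# lossy encoder substitutes '?' (byte 63) for them.
ascii_bytes = chr(255).encode("ascii", errors="replace")
print(ascii_bytes)      # b'?'
print(ascii_bytes[0])   # 63  -- exactly the symptom in the question

# The fix: treat the packet ID as a raw byte, never as ASCII text.
packet = bytes([255])
print(packet[0])        # 255
```

In other words, skip the string round trip entirely and write the ID byte directly to the socket.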

Related

What is Boundary Value Analysis?

In an eCommerce website, the first-name text field accepts a maximum of 30 characters. What are the BVA values?
Boundary values are the thresholds where a program behaves differently.
For instance, suppose I am sending messages. Each message must be split into packets of between 1 and 100 bytes. If the message is 100 bytes then one packet should be sent. If the message is 101 bytes then two packets should be sent. There is a boundary between 100 and 101 bytes. Both lengths should be tested.
Bugs typically happen at boundary values because that is where off-by-one errors and similar problems exist. In the message example above, you might find that a 100 byte message gets sent as two packets, or a 101 byte message drops the last byte, because of some subtle bug in the packet logic. This might happen even though shorter or longer messages get sent correctly, which is why it is important to concentrate on boundary values when testing.
Boundary Value Analysis is the process of identifying boundary values, both by inspecting the specification and inspecting the code. Look for conditions on things like length and range. For instance, if a field must be all letters then check values like 'a' (0x61), '`' (backtick, 0x60), 'z' (0x7a) and '{' (0x7b). If a field must not be more than 30 characters, check 30 and 31. And so on.
(Also, don't forget that Unicode is a thing: check non-English and non-Latin characters too).
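The message-splitting example above can be captured as a tiny boundary-value test. A minimal Python sketch (the packet-count formula is the obvious ceiling division implied by the example, not code from the answer):

```python
import math

def packet_count(message_len: int, max_packet: int = 100) -> int:
    # Each packet carries between 1 and max_packet bytes,
    # so a message needs ceil(len / max_packet) packets.
    return math.ceil(message_len / max_packet)

# Boundary-value tests straddle the 100/101 threshold:
for n in (99, 100, 101):
    print(n, "->", packet_count(n))   # 99 -> 1, 100 -> 1, 101 -> 2
```

An off-by-one bug (say, `message_len // max_packet + 1`) would pass for 101 but fail at exactly 100, which is why both sides of the boundary belong in the test set.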

How to predict size for output of SslEncryptPacket

Usually a Win32 API can tell you the required output buffer length: you pass 0 as the buffer length and the API fails with a buffer-too-small error, returning the number of bytes required.
But it is not the same with SslEncryptPacket. It just fails with an error about the buffer being too small, and that's all.
There is also SslLookupCipherLengths, which I suppose should be used for this, but the documentation gives no clue about how to calculate the output buffer size from that info.
Maybe you can tell? Usually I would just reserve an extra kilobyte, but in my situation I need to know exactly.
You probably already know that in order to go through the TLS/SSL handshake, you repeatedly call SSPI->InitializeSecurityContext (on the client side) or SSPI->AcceptSecurityContext (on the server side).
Once the function returns SEC_E_OK, you should call SSPI->QueryContextAttributes with SECPKG_ATTR_STREAM_SIZES to determine the sizes of the header and trailer. It also tells you the number of SecBuffers to use for the SSPI->EncryptMessage function, and it tells you the maximum size of the message that you can pass to EncryptMessage.
As I understand it, the values returned may vary depending on the cipher the OS chooses for the connection. I'm not intimately familiar with TLS/SSL, but I believe typical values are 5 bytes for the header, 36 for the trailer, and 16384 for the maximum message length. Your mileage may vary, which is why you should call QueryContextAttributes(... SECPKG_ATTR_STREAM_SIZES ...).
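Once you have the stream sizes, the output buffer for EncryptMessage is simply header + payload + trailer. A hedged sketch of that arithmetic (the 5/36/16384 figures are the illustrative values mentioned above; the real ones must come from QueryContextAttributes at runtime):

```python
# Illustrative SecPkgContext_StreamSizes values -- assumptions for this
# sketch only; query the actual values from the security context.
CB_HEADER = 5
CB_TRAILER = 36
CB_MAX_MESSAGE = 16384

def required_buffer(payload_len: int) -> int:
    """Bytes needed for one EncryptMessage call: header + data + trailer."""
    if payload_len > CB_MAX_MESSAGE:
        raise ValueError("split the payload into chunks of cbMaximumMessage")
    return CB_HEADER + payload_len + CB_TRAILER

print(required_buffer(100))  # 141
```

So for an exact allocation you size the buffer per chunk, never per whole message, and chunk at cbMaximumMessage.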

Erlang binary protocol serialization

I'm currently using Erlang for a big project, but I have a question regarding proper procedure.
I receive bytes over a TCP socket. The bytes follow a fixed protocol; the sender is a Python client. The Python client uses class inheritance to create the bytes from its objects.
Now I would like to (in Erlang) take the bytes and convert them to their equivalent messages; they all have a common message header.
How can I do this as generically as possible in Erlang?
Kind Regards,
Me
Use pattern matching/binary header consumption with Erlang's binary syntax. But you will need to know either exactly what bytes or bits you are expecting to receive, or the field sizes in bytes or bits.
For example, let's say that you are expecting a string of bytes that will either begin with the equivalent of the ASCII strings "PUSH" or "PULL", followed by some other data you will place somewhere. You can create a function head that matches those, and captures the rest to pass on to a function that does "push()" or "pull()" based on the byte header:
operation_type(<<"PUSH", Rest/binary>>) -> push(Rest);
operation_type(<<"PULL", Rest/binary>>) -> pull(Rest).
The bytes after the first four will now be in Rest, leaving you free to interpret whatever subsequent headers or data remain in turn. You could also match on the whole binary:
operation_type(Bin = <<"PUSH", _/binary>>) -> push(Bin);
operation_type(Bin = <<"PULL", _/binary>>) -> pull(Bin).
In this case the "_" variable works like it always does -- you're just checking for the lead, essentially peeking the buffer and passing the whole thing on based on the initial contents.
You could also skip around in it. Say you knew you were going to receive a binary with 4 bytes of fluff at the front, 6 bytes of type data, and then the rest you want to pass on:
filter_thingy(<<_:4/binary, Type:6/binary, Rest/binary>>) ->
    %% Do stuff with Rest based on Type (handle/2 is a placeholder)...
    handle(Type, Rest).
It becomes very natural to split binaries in function headers (whether the data equates to character strings or not), letting the "Rest" fall through to appropriate functions as you go along. If you are receiving Python pickle data or something similar, you would want to write the parsing routine in a recursive way, so that the conclusion of each data type returns you to the top to determine the next type, with an accumulated tree that represents the data read so far.
I only covered 8-bit bytes above, but there is also a pure bitstring syntax, which lets you go as far into the weeds with bits and bytes as you need with the same ease of syntax. Matching is a real lifesaver here.
Hopefully this informed more than confused. Binary syntax in Erlang makes this the most pleasant binary parsing environment in a general programming language I've yet encountered.
http://www.erlang.org/doc/programming_examples/bit_syntax.html
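Since the sender in the question is a Python client, the same header dispatch can be mirrored on the Python side for comparison (the function names and the "push:"/"pull:" return values here are purely illustrative):

```python
# Illustrative handlers -- placeholders for whatever push/pull actually do.
def push(rest: bytes) -> str:
    return "push:" + rest.decode()

def pull(rest: bytes) -> str:
    return "pull:" + rest.decode()

# Mirror of the Erlang function heads: dispatch on a 4-byte ASCII tag
# and hand the remaining bytes to the matching handler.
def operation_type(packet: bytes) -> str:
    if packet.startswith(b"PUSH"):
        return push(packet[4:])
    if packet.startswith(b"PULL"):
        return pull(packet[4:])
    raise ValueError("unknown header")

print(operation_type(b"PUSHdata"))  # push:data
```

The Erlang version is terser because the match and the destructuring happen in one function head, which is exactly the ergonomic win the answer describes.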

Trying to understand nbits value from stratum protocol

I'm looking at the stratum protocol and I'm having a problem with the nbits value of the mining.notify method. I have trouble calculating it, I assume it's the currency difficulty.
I pulled a notify from a Dogecoin pool and it returned 1b3cc366; at the time the difficulty was 1078.52975077.
I'm assuming that 1b3cc366 should give me 1078.52975077 when converted, but I can't seem to do the conversion right.
I've looked here, here and also tried the .NET function BitConverter.Int64BitsToDouble.
Can someone help me understand what the nbits value signify?
You are right: nbits encodes the current network difficulty.
The difficulty encoding is thoroughly described here.
Hexadecimal representation like 0x1b3cc366 consists of two parts:
0x1b -- number of bytes in a target
0x3cc366 -- target prefix
This means that valid hash should be less than 0x3cc366000000000000000000000000000000000000000000000000 (it is exactly 0x1b = 27 bytes long).
Floating point representation of difficulty shows how much current target is harder than the one used in the genesis block.
Satoshi decided to use 0x1d00ffff as a difficulty for the genesis block, so the target was
0x00ffff0000000000000000000000000000000000000000000000000000.
And 1078.52975077 is the ratio of the initial target to the current one (i.e. how many times harder the current target is):
$ echo 'ibase=16;FFFF0000000000000000000000000000000000000000000000000000 / 3CC366000000000000000000000000000000000000000000000000' | bc -l
1078.52975077482646448605
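The same conversion in Python, decoding the compact nbits form directly (target = mantissa * 256^(exponent - 3), with the genesis nbits 0x1d00ffff as the difficulty-1 reference):

```python
def nbits_to_target(nbits: int) -> int:
    # High byte is the size (exponent), low 3 bytes are the mantissa.
    exponent = nbits >> 24
    mantissa = nbits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

# Difficulty-1 target, from the genesis block's nbits of 0x1d00ffff.
GENESIS_TARGET = nbits_to_target(0x1D00FFFF)

def difficulty(nbits: int) -> float:
    return GENESIS_TARGET / nbits_to_target(nbits)

print("%.8f" % difficulty(0x1B3CC366))  # 1078.52975077
```

This reproduces the bc calculation above: 0x1b3cc366 decodes to the 27-byte target and dividing the genesis target by it gives 1078.52975077.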

Analysis of pgpdump output

I have used pgpdump on an encrypted file (via BouncyCastle) to get more information about it and found several lines about partial start, partial continue and partial end.
So I was wondering what exactly this was describing. Is it some sort of fragmentation of plain text?
Furthermore what does the bit count stand for after the RSA algorithm? In this case it's 1022 bits, but I've seen files with 1023 and 1024bits.
Partial body lengths are pretty well explained by this tumblr post. OpenPGP messages are composed of packets of a given length. Sometimes, for large outputs (or, in the case of packets from GnuPG, short messages), there will be partial body lengths, which specify that another header will show up telling the reader to continue reading. From the post:
A partial body length tells the parser: “I know there are at least N more bytes in this packet. After N more bytes, there will be another header to tell you how many more bytes to read.” The idea being, I guess, that you can encrypt a stream of data as it comes in without having to know when it ends. Maybe you are PGP encrypting a speech, or some off-the-air TV. I don’t know. It can be infinite length — you can just keep throwing more partial body length headers in there, each one can handle up to a gigabyte in length. Every gigabyte it informs the parser: “yeah, there’s more coming!”
So in the case of your screenshot, pgpdump reads 8192 bytes, then encounters another header that says to read another 2048 bytes. After those 2K bytes, it hits another header for 1037 bytes, and so on until the last continue header; 489 bytes after that is the end of the message.
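The length headers being described are the OpenPGP new-format lengths from RFC 4880 §4.2.2, where partial body lengths are always powers of two (up to 1 GiB). A sketch of the decoding rules:

```python
# Decode an OpenPGP new-format length header (RFC 4880, section 4.2.2).
def decode_length(data: bytes):
    """Return (length, is_partial, octets_consumed)."""
    o = data[0]
    if o < 192:                  # one-octet length: 0..191
        return o, False, 1
    if o < 224:                  # two-octet length: 192..8383
        return ((o - 192) << 8) + data[1] + 192, False, 2
    if o < 255:                  # partial body length: 1 << (o & 0x1F)
        return 1 << (o & 0x1F), True, 1
    # five-octet length: 0xFF marker + 4-byte big-endian scalar
    return int.from_bytes(data[1:5], "big"), False, 5

# 0xED encodes a partial length of 2**13 = 8192 bytes, matching the
# first chunk pgpdump reports above.
print(decode_length(bytes([0xED])))  # (8192, True, 1)
```

Note that only the final chunk of a packet may use a non-partial (definite) length, which is how the parser knows where the message ends.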
The 1022 bits is the length of the public modulus. It will always be close to 1024 (for a 1024-bit key) but can end up slightly shorter, depending on the initial selection of the RSA parameters. They are still called "1024-bit keys", even though the modulus is sometimes slightly shorter than that.