J8583 LLLLBIN and LLLLVAR produce different length padding results - kotlin

LLLLVAR and LLLLBIN produce different encoded lengths from the same input.
I tried to pass the value "6382" into the same IsoMessage object; however, LLLLVAR encodes to "00046382", while LLLLBIN encodes to "000836333832".
A sample of the source code is below:
msg.setValue(60, "6382".toByteArray(Charsets.US_ASCII), IsoType.LLLLBIN, 10) // encodes to 000836333832
msg.setValue(60, "6382", IsoType.LLLLVAR, 10) // encodes to 00046382
I thought both should return a length prefix of 0004; why are the results different?

When you encode ISO messages as text, the LxBIN fields encode their data in hex, and so the size is double what you'd expect. However, the decoder decodes the hex data and gives you a byte array when parsing.
LxVAR and LxBIN fields only have the same length when the whole message is encoded using binary formatting.
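To see the doubling concretely, here is a minimal sketch in plain Kotlin that reproduces the text-mode formatting by hand (no j8583 dependency; the format strings are illustrative, not the library's internals):

fun main() {
    val value = "6382"
    val bytes = value.toByteArray(Charsets.US_ASCII)        // 4 bytes
    val hex = bytes.joinToString("") { "%02X".format(it) }  // "36333832": 8 characters
    // LLLLVAR in text mode: 4-digit length prefix + the characters themselves
    println("%04d".format(value.length) + value)            // 00046382
    // LLLLBIN in text mode: the bytes are written as hex, doubling the size
    println("%04d".format(hex.length) + hex)                // 000836333832
}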

Related

How to convert hex to longitude and latitude?

Recently I got a GPS receiver that outputs data as hexadecimal values, and I want to convert the data from hex to decimal directly.
For example:
Lati: DB0A51FC = 22.3368658
Long: FCBD003B = 114.1758175
Alti: 6A3DFF9E = 16.844
The above conversions may not be exact, as I ran two different programs to get the data at the same location, so the values should be only slightly different. However, I want to get the GPS data in degrees from hex directly for further usage.
I would like to know how the conversion works, as the hex numbers give negative values when converted directly to decimal.
Thanks for all the help!
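Negative values from a direct conversion are what you get when a 32-bit word with its high bit set is read as a signed (two's-complement) integer. Below is a Kotlin sketch of the usual candidate readings of such a word; which one is correct depends on the receiver's documented format, so this is illustrative only:

fun main() {
    val hex = "DB0A51FC"
    val raw = hex.toLong(16)                   // unsigned reading: 3674821116
    val signed = raw.toInt()                   // two's-complement reading: -620146180
    val asFloat = Float.fromBits(raw.toInt())  // IEEE-754 single-precision reading
    println("unsigned=$raw signed=$signed float=$asFloat")
    // Many GPS protocols use scaled integers (e.g. a 1e-7 degree resolution);
    // the divisor here is purely a hypothetical example, not the real format.
    println(signed / 1e7)
}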

Convert an alphanumeric string to integer format

I need to store an alphanumeric string in an integer column on one of my models.
I have tried:
@result.each do |i|
  hex_id = []
  i["id"].split(//).each { |c| hex_id.push(c.hex) }
  hex_id = hex_id.join
  ...
  Model.create(:origin_id => hex_id)
  ...
end
When I run this in the console using puts hex_id in place of the create line, it prints the correct values. However, the above code results in origin_id being set to "2147483647" for every instance. An example input string is "t6gnk3pp86gg4sboh5oin5vr40", so that doesn't make any sense to me.
Can anyone tell me what is going wrong here, or suggest a better way to store a string like the example above as a unique integer?
Thanks.
Answering by request from the OP:
The hex_id.join step is not the culprit: calling join on an array of integers converts each element to a string and concatenates them, so hex_id ends up as a long string of digits. The problem is what happens on save: origin_id is a 32-bit integer column, the digit string far exceeds the maximum positive value of that type, 2147483647, and the database clamps the out-of-range value to that maximum. That is why every instance ends up with the same number.
On the other hand, the desired result 060003008600401100500050040 is too large to be stored as an integer anyway. A better approach would be to keep it as a string, or to use a different algorithm for producing a number from the original string. Perhaps aggregating the hex values with an arithmetic operation would do better than join?
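A quick illustration of the overflow, sketched in Kotlin purely to show the numeric limits involved (the digit string is the one from the question):

import java.math.BigInteger

fun main() {
    val digits = "060003008600401100500050040"
    println(Int.MAX_VALUE)           // 2147483647, the value every row ended up with
    println(digits.toIntOrNull())    // null: the number does not fit in 32 bits
    println(BigInteger(digits))      // arbitrary precision keeps it intact
}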

Objective C: Parsing JSON string

I have string data which I need to parse into a dictionary object. Here is my code:
NSString *barcode = [NSString stringWithString:@"{\"OTP\": 24923313, \"Person\": 100000000000112, \"Coupons\": [ 54900012445, 499030000003, 00000005662 ] }"];
NSLog(@"%@", [barcode objectFromJSONString]);
In the log, I get a NULL result. But if I pass only one value in Coupons, I get the results. How do I get all three values?
00000005662 might not be a proper integer number as it's prefixed by zeroes (which means it's octal, IIRC). Try removing them.
Cyrille is right; here is the authoritative answer:
The application/json Media Type for JavaScript Object Notation (JSON): 2.4 Numbers
The representation of numbers is similar to that used in most programming languages. A number contains an integer component that may be prefixed with an optional minus sign, which may be followed by a fraction part and/or an exponent part.
Octal and hex forms are not allowed. Leading zeros are not allowed.
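As a quick demonstration that conformant parsers enforce this, here is a Kotlin sketch using Jackson as a stand-in for any strict JSON parser (the same strictness is presumably why objectFromJSONString returns nil above):

import com.fasterxml.jackson.databind.ObjectMapper

fun main() {
    val mapper = ObjectMapper()
    // 00000005662 has leading zeros, which the JSON grammar forbids
    val bad = """{ "Coupons": [ 54900012445, 499030000003, 00000005662 ] }"""
    try {
        println(mapper.readTree(bad))
    } catch (e: Exception) {
        println("Rejected: " + e.message)   // Jackson refuses the leading zeros
    }
    // With the zeros stripped, the same document parses fine
    val good = """{ "Coupons": [ 54900012445, 499030000003, 5662 ] }"""
    println(mapper.readTree(good))
}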

Use of byte arrays and hex values in Cryptography

When we use cryptography, we always see byte arrays being used instead of String values. But when we look at the techniques of most cryptographic algorithms, they use hex values for their operations. E.g. AES: MixColumns, SubBytes; all these techniques (I suppose) use hex values for those operations.
Can you explain how these byte arrays are used in these operations as hex values?
I have an assignment to develop an encryption algorithm, so any related sample code would be much appreciated.
Every four digits of binary makes a hexadecimal digit, so, you can convert back and forth quite easily (see: http://en.wikipedia.org/wiki/Hexadecimal#Binary_conversion).
I don't think I fully understand what you're asking, though.
The most important thing to understand about hexadecimal is that it is a system for representing numeric values, just like binary or decimal. It is nothing more than notation. As you may know, many computer languages allow you to specify numeric literals in a few different ways:
int a = 42;
int a = 0x2A;
These store the same value into the variable 'a', and a compiler should generate identical code for them. The difference between these two lines will be lost very early in the compilation process, because the compiler cares about the value you specified, and not so much about the representation you used to encode it in your source file.
Main takeaway: there is no such thing as "hex values" - there are just hex representations of values.
That all said, you also talk about string values. Obviously 42 != "42" != "2A" != 0x2A. If you have a string, you'll need to parse it to a numeric value before you do any computation with it.
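A tiny Kotlin illustration of the distinction (purely illustrative):

fun main() {
    val a = 42
    val b = 0x2A
    println(a == b)           // true: one value, two notations
    println("2A".toInt(16))   // 42: a hex string must be parsed before use
}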
Bytes, byte arrays and/or memory areas are normally displayed in an IDE (integrated development environment) and debugger as hexadecimal. This is because it is the most efficient and clear representation of a byte. For an experienced programmer it is pretty easy to convert hex digits to bits in their head. You can clearly see how XOR and shifts work as well, for instance; those (and addition) are the most common operations when doing symmetric encryption/hashing.
So it's unlikely that the program itself performs this kind of conversion; it's probably the environment you are working in. That, and source code (which is converted to bytes at compile time) probably uses a lot of literals in hexadecimal notation as well.
Cryptography in general (hash functions excepted) is a method of converting data from one format to another, mostly referred to as cipher text, using a secret key. The secret key can be applied to the cipher text to get back the original data, also referred to as plain text. In this process, data is handled at the byte level, though it can be at the bit level as well. The point here is that the text or strings we refer to live within the limited range of a byte; ASCII, for example, defines characters within a certain range of byte values. In practice, when a crypto operation is performed, each character is converted to its equivalent byte and the process is carried out using the key. The resulting bytes will most probably fall outside the range of human-readable, defined text such as ASCII. For this reason, any data a crypto function is to be applied to is converted to a byte array first. For example, suppose the text to be enciphered is "Hello how are you doing?". The following steps would be followed:
1. byte[] data = "Hello how are you doing?".getBytes()
2. Encipher data using the key, which is also a byte[]
3. The output blob is referred to as cipherTextBytes[]
4. Encryption is complete
5. Using the key, a decryption process is performed over cipherTextBytes[], which returns the original data bytes
6. A simple new String(data) will return the string value "Hello how are you doing?"
This is simple background information which might help you understand reference code and manuals better; in no way am I trying to explain the core of cryptography here.
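For reference, a minimal Kotlin sketch of that round trip using the JDK's javax.crypto (ECB mode is chosen only to keep the sketch short; real code should use an authenticated mode such as AES/GCM):

import javax.crypto.Cipher
import javax.crypto.KeyGenerator

fun main() {
    val key = KeyGenerator.getInstance("AES").generateKey()
    val data = "Hello how are you doing?".toByteArray()            // step 1

    val enc = Cipher.getInstance("AES/ECB/PKCS5Padding")
    enc.init(Cipher.ENCRYPT_MODE, key)
    val cipherTextBytes = enc.doFinal(data)                        // steps 2-4

    val dec = Cipher.getInstance("AES/ECB/PKCS5Padding")
    dec.init(Cipher.DECRYPT_MODE, key)
    val plain = dec.doFinal(cipherTextBytes)                       // step 5

    println(String(plain))                                         // step 6
    // The cipher text is typically shown as hex, since it is not readable text
    println(cipherTextBytes.joinToString("") { "%02X".format(it) })
}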

How do I get the length (i.e. number of characters) of an ASCII string in VB.NET?

I'm using this code to return a string from a TcpClient, but when the string comes back it has a leading " character in it. I'm trying to remove it, but the Len() function seems to be reading the number of bytes instead of the string itself. How can I alter this to get the length of the string as I would normally use it, and not of the array underlying the string?
Dim bytes(tcpClient.ReceiveBufferSize) As Byte
networkStream.Read(bytes, 0, CInt(tcpClient.ReceiveBufferSize))
' Output the data received from the host to the console.
Dim returndata As String = Encoding.ASCII.GetString(bytes)
Dim LL As Int32 = Len(returndata)
Len() reports the number of bytes, not the number of characters in the string.
Your code is currently somewhat broken. The answer will be tcpClient.ReceiveBufferSize regardless of how much data you actually received, because you're ignoring the return value of networkStream.Read. It could return just a few bytes, but you're creating a string from the rest of the bytes array anyway. Always check the return value of Stream.Read, because otherwise you don't know how much data has actually been read. You should do something like:
Dim bytesRead = networkStream.Read(bytes, 0, CInt(tcpClient.ReceiveBufferSize))
' Output the data received from the host to the console.
Dim returndata As String = Encoding.ASCII.GetString(bytes, 0, bytesRead)
Now, ASCII always has a single character per byte (and vice versa) so the length of the string will be exactly the same as the length of the data you received.
Be aware that any non-ASCII data (i.e. any bytes over 127) will be converted to '?' by Encoding.ASCII.GetString. You may also get control characters. Is this definitely ASCII text data to start with? If it's not, I'd recommend hex-encoding it or using some other option to dump the exact data in a non-lossy way.
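As a sketch of that kind of non-lossy hex dump (written in Kotlin for illustration; the received bytes here are hypothetical):

fun hexDump(bytes: ByteArray, count: Int): String =
    bytes.take(count).joinToString(" ") { "%02X".format(it) }

fun main() {
    // Hypothetical buffer: a stray quote byte (0x22) in front of "Hi"
    val received = byteArrayOf(0x22, 0x48, 0x69)
    println(hexDump(received, received.size))   // prints: 22 48 69
}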
You could try trimming the string inside the call to Len():
Dim LL As Int32 = Len(returndata.Trim())
If Len reports the number of bytes and it doesn't match the number of characters, then I can think of two possibilities:
There are more chars being sent than you think (i.e., that extra character is actually being sent).
The encoding is not ASCII, so there can be more than one byte per char (and one of them is that 'weird' character; that is, the character is being sent and is not 'wrong data'). Try to find out whether the data is really ASCII-encoded; if not, change the call accordingly.
If I read you correctly, you get a single quotation mark at the beginning, right?
If you get that one consistently, why not just subtract one from the string length? Or use a substring from the second character:
Len(returndata.Substring(1))
And I don't quite understand what you mean by »the length of the string as I would normally use it and not of the array underlying the string itself«. You have a string. Any array which might represent that string internally is entirely implementation-dependent and nothing you should see or rely on. Or am I getting you wrong here? The string is what you are using normally. I mean, if that's not what you do, then why not take the length of the string after processing it into something you would normally use?
Maybe I am missing something here, but what is wrong with String.Length?
Dim LL As Int32 = returndata.Length