Different encodings in Kannel for SMS ID in submit_sm_resp - SMPP

We have multiple SMPP connections set up and working well with DLRs for outgoing messages, but one new operator seems to be sending the "message_id" parameter as a regular string that is hex-encoded, instead of the usual octet string we get from the other operators.
Here is how our usual message_id field looks (from kannel.log):
message_id:
Octet string at 0x7fb584042eb0:
len: 32
size: 33
immutable: 0
data: 31 38 32 37 64 61 33 30 34 33 61 30 30 30 31 37 1827da3043a00017
data: 35 33 36 36 65 64 63 61 37 32 38 63 62 33 37 31 5366edca728cb371
This works fine: the ID ("1827da3043a000175366edca728cb371") is decoded correctly and matches the one in the DLR.
For this new route, however, it looks like this:
message_id: "763307F1"
which is the hex-encoded version of the ID (without the leading 0x). We then get the DLR PDU with id: 1983055857, which is 0x763307F1 converted to decimal. This causes the SQL query that matches the DLR against the outgoing message to fail, so the DLR is thrown away.
Is it possible to force Kannel to store the ID in decimal instead of hex, or is this an error that needs to be corrected on the operator's side for this to work? What causes the two IDs to be encoded and sent differently?
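For what it's worth, Kannel's SMPP driver has a msg-id-type directive in the smsc group that is meant for exactly this kind of mismatch. The exact semantics depend on your Kannel version, so treat the following as a sketch to verify against the User Guide for your release:

```
group = smsc
smsc = smpp
# ... host, port, credentials as before ...
# Hypothetical setting, assuming this operator returns the submit_sm_resp
# message_id as a hex C string while the DLR carries the decimal form:
# bit 0x01 = message id in submit_sm_resp is in hex
# bit 0x02 = message id in deliver_sm is in hex
msg-id-type = 0x01
```

If the option is not available or does not cover this combination, normalizing both IDs to one base in the DLR-matching SQL is the usual workaround.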

Related

PLC S7-1500 SQL connection - received data shifted by 1 byte - why?

I've been successfully using an S7-1500 PLC in combination with an SQL Server for quite some time now.
I set everything up as described here: S7SQL-Guide-Stackoverflow
Today I tried to add a second parameter to my SQL query, so I made something like this:
select Number1, Number2 from MYTABLE where Apple = 'red' and Sky = 'blue'
The S7 sends the telegram, and the SQL-Server replies. So far so good.
I set up the size of TokenColumnMetaData according to my Wireshark capture, then compiled and sent the updates to my PLC.
Now the part which I don't understand:
I am expecting to receive the value "12345"
so again I used Wireshark to see what I should expect.
What I got is: 39 30 00 00, which is 12345 with the bytes reversed (little-endian) - no problem so far. But when I check on the S7 side, I see this:
My input is shifted by 1 byte.
How can I solve this?
Unfortunately, I don't have deeper knowledge of the code provided by Siemens for this application.
Edit:
[Screenshot of the typeUseCaseSpecificToken row]
Sometimes odd memory effects occur when the data type comprises an odd number of bytes.
Siemens starts each element on an even memory address. So if Length is at address 0 and Data is at addresses 2-5, then address 1 may be receiving the first byte that was intended for Data:
Address | Data | Element
--------+------+----------------
  00    |  04  | Length 0
  01    |  39  |
  02    |  30  | Data 0, byte 0
  03    |  00  | Data 0, byte 1
  04    |  00  | Data 0, byte 2
  05    |  08  | Data 0, byte 3
  06    |  47  | Length 1
  07    |  94  |
  08    |  03  | Data 1, byte 0
  09    |  00  | Data 1, byte 1
  10    |  00  | Data 1, byte 2
  11    |  00  | Data 1, byte 3
  12    |  00  | Length 2
  13    |  00  |
  14    |  FD  | Data 2, byte 0
  15    |  10  | Data 2, byte 1
  16    |  00  | Data 2, byte 2
  17    |  C1  | Data 2, byte 3

Prevent Envoy from modifying the sharding key

We use a two-layer Envoy setup.
[front-end] -> E -> [middleware] -> E -> [backend]
Middleware is supposed to take the sharding key from the HTTP metadata and re-transmit it when talking to the backend.
What we have noticed is that Envoy modifies the HTTP header, which crashes our service inside gRPC.
E1016 11:19:45.808599731 19 call.cc:912] validate_metadata: {"created":"#1602847185.808584663","description":"Illegal header value","file":"external/com_github_grpc_grpc/src/core/lib/surface/validate_metadata.cc","file_line":44,"offset":56,"raw_bytes":"36 37 36 38 33 61 34 34 36 35 36 35 37 30 34 33 36 66 36 34 36 35 34 31 34 39 33 61 36 35 36 33 36 63 36 39 37 30 37 33 36 35 32 64 37 30 36 63 37 35 36 37 36 39 36 65 a5 '67683a44656570436f646541493a65636c697073652d706c7567696e.'\u0000"}
E1016 11:19:45.808619606 19 call_op_set.h:947] assertion failed: false
Any way to avoid this?
UPDATE:
It seems to happen only with x- headers.
In the end, the problem was not related to Envoy at all: gRPC strings are not null-terminated, and our code assumed they were.
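Decoding the raw_bytes from the log illustrates the failure: the value is clean ASCII right up to one stray trailing byte, which is exactly what reading past the end of a non-null-terminated string looks like. A quick sketch:

```python
# The raw_bytes hex dump from the validate_metadata error above.
raw = bytes.fromhex(
    "36373638336134343635363537303433366636343635"
    "34313439336136353633366336393730373336353264"
    "373036633735363736393665a5"
)

# Everything except the last byte is printable ASCII (note the error
# reports offset 56, which is exactly where the stray byte sits):
print(raw[:-1].decode("ascii"))
print(hex(raw[-1]))  # 0xa5 - garbage read past the string's end
```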

Extracting data from a .DLL: unknown file offsets

I'm currently trying to extract some data from a .DLL library. I've figured out the file structure: there are 1039 data blocks compressed with zlib, starting at offset 0x3c00, the last one being the fat table. The fat table itself is divided into 1038 "blocks" (8 bytes plus a base64-encoded string - the filename). As far as I've seen, byte 5 is the length of the filename.
My problem is that I can't work out what bytes 1-4 are used for. My first guess was that they are an offset used to locate the file block inside the .DLL (mainly because the values increase throughout the table), but, for instance, the first "block" is:
Supposed offset: 2E 78 00 00
Filename length: 30 00 00 00
Base64 encoded filename: 59 6D 46 30 64 47 78 6C 58 32 6C 75 64 47 56 79 5A 6D 46 6A 5A 56 78 42 59 33 52 70 64 6D 56 51 5A 58 4A 72 63 31 4E 6F 62 33 63 75 59 77 3D 3D
yet, as I said earlier, the first block itself is at 0x3c00, so things don't match. The same goes for the second block (starting at 0x3f0b, whereas the table's supposed offset is 0x167e).
Any ideas?
Answering my own question lol
Anyway, those numbers are the actual offsets of the file blocks, except that the first one starts from some arbitrary number instead of the actual location of the first block. Aside from that, though, the differences between consecutive offsets do match the lengths of the corresponding blocks.
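Under those assumptions (4-byte little-endian offset, 4-byte little-endian name length, then a base64 filename), a fat-table entry can be parsed with a few lines of Python. The entry below is reconstructed from the question's first block:

```python
import base64
import struct

# First fat-table entry from the question: offset 2E 78 00 00 (0x782E),
# length 30 00 00 00 (48), then a 48-byte base64-encoded filename.
name_b64 = b"YmF0dGxlX2ludGVyZmFjZVxBY3RpdmVQZXJrc1Nob3cuYw=="
entry = struct.pack("<II", 0x782E, len(name_b64)) + name_b64

offset, name_len = struct.unpack_from("<II", entry, 0)
filename = base64.b64decode(entry[8:8 + name_len]).decode("ascii")
print(hex(offset), filename)  # 0x782e battle_interface\ActivePerksShow.c

# If, as the self-answer says, differences between consecutive stored
# offsets match the block lengths, file positions follow from the first
# block's known location: file_pos = stored_offset + (0x3C00 - first_offset).
```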

How to return LOW VALUES HEX '00' in a SQL statement?

I need to write a LOW VALUES HEX '00' into a file (in the middle of a string).
I can do it with the utl_file package, using utl_file.put_raw(v_file, hextoraw('000000')), but that only works at the beginning and end of the file, not in the middle of a string.
So my question is: how do I produce a LOW VALUES HEX '00' in a select statement?
I tried some variants like
select 'blablabla' q, hextoraw('000000'), 'blablabla' w from dual;
saved the result into a .dat file and opened it in a hex editor, but the output was different from the one produced with utl_file.
Could anybody write a correct SQL statement, if it's possible?
If I understand you correctly, you're trying to add a null/binary zero to your output. If so, you can just use chr(0),
e.g. utl_file.putf(l_file, 'This is a binary zero' || chr(0));
Looking at that in a hex editor will show you:
00000000 54 68 69 73 20 69 73 20 61 20 62 69 6e 61 72 79 |This is a binary|
00000010 20 7a 65 72 6f 00 0a | zero..|

What does a zlib header look like?

In my project I need to know what a zlib header looks like. I've heard it's rather simple but I cannot find any description of the zlib header.
For example, does it contain a magic number?
zlib magic headers
78 01 - No Compression/low
78 9C - Default Compression
78 DA - Best Compression
Link to RFC
0 1
+---+---+
|CMF|FLG|
+---+---+
CMF (Compression Method and flags)
This byte is divided into a 4-bit compression method and a 4-bit information field depending on the compression method.
bits 0 to 3 CM Compression method
bits 4 to 7 CINFO Compression info
CM (Compression method)
This identifies the compression method used in the file. CM = 8
denotes the "deflate" compression method with a window size up
to 32K. This is the method used by gzip and PNG and almost everything else.
CM = 15 is reserved.
CINFO (Compression info)
For CM = 8, CINFO is the base-2 logarithm of the LZ77 window
size, minus eight (CINFO=7 indicates a 32K window size). Values
of CINFO above 7 are not allowed in this version of the
specification. CINFO is not defined in this specification for
CM not equal to 8.
In practice, this means the first byte is almost always 78 (hex)
FLG (FLaGs)
This flag byte is divided as follows:
bits 0 to 4 FCHECK (check bits for CMF and FLG)
bit 5 FDICT (preset dictionary)
bits 6 to 7 FLEVEL (compression level)
The FCHECK value must be such that CMF and FLG, when viewed as
a 16-bit unsigned integer stored in MSB order (CMF*256 + FLG),
is a multiple of 31.
FLEVEL (Compression level)
These flags are available for use by specific compression
methods. The "deflate" method (CM = 8) sets these flags as
follows:
0 - compressor used fastest algorithm
1 - compressor used fast algorithm
2 - compressor used default algorithm
3 - compressor used maximum compression, slowest algorithm
ZLIB/GZIP headers
Level | ZLIB | GZIP
1 | 78 01 | 1F 8B
2 | 78 5E | 1F 8B
3 | 78 5E | 1F 8B
4 | 78 5E | 1F 8B
5 | 78 5E | 1F 8B
6 | 78 9C | 1F 8B
7 | 78 DA | 1F 8B
8 | 78 DA | 1F 8B
9 | 78 DA | 1F 8B
Raw Deflate data doesn't have a common header.
The ZLIB header (as defined in RFC1950) is a 16-bit, big-endian value - in other words, it is two bytes long, with the higher bits in the first byte and the lower bits in the second.
It contains these bitfields from most to least significant:
CINFO (bits 12-15, first byte)
Indicates the window size as a power of two, from 0 (256 bytes) to 7 (32768 bytes). This will usually be 7. Higher values are not allowed.
CM (bits 8-11)
The compression method. Only Deflate (8) is allowed.
FLEVEL (bits 6-7, second byte)
Roughly indicates the compression level, from 0 (fast/low) to 3 (slow/high)
FDICT (bit 5)
Indicates whether a preset dictionary is used. This is usually 0.
(1 is technically allowed, but I don't know of any Deflate formats that define preset dictionaries.)
FCHECK (bits 0-4)
A checksum (5 bits, 0..31), whose value is calculated such that the entire 16-bit value (CMF*256 + FLG) is a multiple of 31.*
Typically, only the CINFO and FLEVEL fields can be freely changed, and FCHECK must be calculated based on the final value. Assuming no preset dictionary, there is no choice in what the other fields contain, so a total of 32 possible headers are valid. Here they are:
FLEVEL: 0 1 2 3
CINFO:
0 08 1D 08 5B 08 99 08 D7
1 18 19 18 57 18 95 18 D3
2 28 15 28 53 28 91 28 CF
3 38 11 38 4F 38 8D 38 CB
4 48 0D 48 4B 48 89 48 C7
5 58 09 58 47 58 85 58 C3
6 68 05 68 43 68 81 68 DE
7 78 01 78 5E 78 9C 78 DA
The CINFO field is rarely, if ever, set by compressors to be anything other than 7 (indicating the maximum 32KB window), so the only values you are likely to see in the wild are the four in the bottom row (beginning with 78).
* (You might wonder if there's a small amount of leeway on the value of FCHECK - could it be set to either of 0 or 31 if both pass the checksum? In practice though, this can only occur if FDICT=1, so it doesn't feature in the above table.)
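The header table can be regenerated mechanically. A small Python sketch of the FCHECK arithmetic, with a cross-check against zlib's own output:

```python
import zlib

def zlib_header(cinfo=7, flevel=2, fdict=0):
    """Build a 2-byte zlib header (RFC 1950) for CM = 8 (deflate)."""
    cmf = (cinfo << 4) | 8
    flg = (flevel << 6) | (fdict << 5)
    # FCHECK: the smallest value making CMF*256 + FLG a multiple of 31.
    flg |= (31 - (cmf * 256 + flg) % 31) % 31
    return bytes([cmf, flg])

# The familiar bottom row of the table (CINFO = 7):
print([zlib_header(flevel=f).hex() for f in range(4)])
# ['7801', '785e', '789c', '78da']

# zlib's default compressor really does emit 78 9c:
assert zlib.compress(b"any data")[:2] == zlib_header(flevel=2)
```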
Following is the Zlib compressed data format.
+---+---+
|CMF|FLG| (2 bytes - Defines the compression mode - More details below)
+---+---+
+---+---+---+---+
| DICTID | (4 bytes. Present only when FLG.FDICT is set.) - Mostly not set
+---+---+---+---+
+=====================+
|...compressed data...| (variable size of data)
+=====================+
+---+---+---+---+
| ADLER32 | (4 bytes of checksum)
+---+---+---+---+
Mostly, FLG.FDICT (the dictionary flag) is not set. In such cases the DICTID is simply not present, so the total header is just 2 bytes.
The header values (CMF and FLG) with no dictionary are defined as follows.
CMF | FLG
0x78 | 0x01 - No Compression/low
0x78 | 0x9C - Default Compression
0x78 | 0xDA - Best Compression
More at ZLIB RFC
All the answers here are most probably correct; however, if you want to manipulate the compressed stream directly and it was produced with the gzopen, gzwrite and gzclose functions, then the data is in the gzip format instead: there is a 10-byte leading header (and no 2-byte zlib header - raw deflate data follows). It is written by gzopen like this:
fprintf(s->file, "%c%c%c%c%c%c%c%c%c%c", gz_magic[0], gz_magic[1],
Z_DEFLATED, 0 /*flags*/, 0,0,0,0 /*time*/, 0 /*xflags*/, OS_CODE);
This results in the following hex dump: 1F 8B 08 00 00 00 00 00 00 0B,
followed by the deflate-compressed stream.
There are also 8 trailing bytes: a uLong CRC over the whole file, then a uLong uncompressed file size. Look for the following bytes at the end of the stream:
putLong (s->file, s->crc);
putLong (s->file, (uLong)(s->in & 0xffffffff));
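Both wrappers are easy to inspect from Python's standard library; this sketch (assuming nothing beyond the zlib and gzip modules) checks the zlib header and Adler-32 trailer, and the gzip magic and CRC-32 trailer described above:

```python
import gzip
import struct
import zlib

payload = b"example payload"

# zlib wrapper (RFC 1950): 2-byte header, deflate data, big-endian Adler-32.
z = zlib.compress(payload)
print(z[:2].hex())  # 789c (default compression)
assert struct.unpack(">I", z[-4:])[0] == zlib.adler32(payload)

# gzip wrapper (RFC 1952): 10-byte header starting 1F 8B 08, raw deflate
# data, then little-endian CRC-32 and uncompressed size.
g = gzip.compress(payload, mtime=0)
print(g[:3].hex())  # 1f8b08
crc, size = struct.unpack("<II", g[-8:])
assert crc == zlib.crc32(payload) and size == len(payload)
```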