Apache adds data to output of javascript file - apache

The weirdest thing, two of my javascript files have stopped being served due to incorrect mime type from apache. All my JS files have text/javascript, but two of them get application/octet-stream.
When troubleshooting I noticed that when I connect to the web server, it outputs "31c2" before the content of the file (see image). This is not an invisible character in the actual file, verified by hexdump. I am assuming that this is the source of the incorrect mime type reporting, but where does this come from? I noticed that after the file is output, apache also adds "0" on a single line.
How do I figure out what causes this? I might add that this file was last edited in 2017 and has worked flawlessly until today or yesterday, and I can't understand why.
Here are two requests side by side to a working .js file (left) and the one that reports incorrect mime type (right). There is no .htaccess file in any parent directory either.

Strings such as 31c2 are hex-encoded numbers: 0x31c2 decodes to 12738, the size in bytes of the chunk that follows. They only appear when you are using HTTP/1.1, not when you are using HTTP/0.9, HTTP/1.0, or HTTP/2.
Why do these 'HEX' encoded numbers appear?
This occurs because HTTP/1.1 uses chunked transfer-encoding. Hence, you can see the header: Transfer-Encoding: chunked.
In chunked transfer-encoding, every chunk is framed as:
<chunk size in hex> CRLF <chunk data> CRLF
Keep in mind that CR = \r (carriage return) and LF = \n (line feed).
Now, for example, if you want to send Hello, World! to the user:
HTTP/1.1 200 OK[CRLF]
Connection: Keep-Alive[CRLF]
Transfer-Encoding: chunked[CRLF]
[CRLF] # blank line: end of headers
5 # chunk size in hex: 0x5 = 5 bytes
[CRLF]
Hello # 5 bytes of chunk data
[CRLF] # CRLF terminating the chunk data
8 # chunk size in hex: 0x8 = 8 bytes
[CRLF]
, World! # 8 bytes of chunk data
[CRLF]
0 # a zero-size chunk marks the end of the body
[CRLF]
[CRLF] # final CRLF closes the message
HTTP/1.1 uses chunked transfer-encoding so the server can send chunks as they become ready instead of sending all the data at once. This is especially useful for large responses: the server doesn't need to calculate the size of the response in advance, which saves time (it is also why the total download size is sometimes unknown when you are downloading from some websites).
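As a sketch, the reassembly a client performs on such a body can be written like this (Python; chunk extensions and trailers are ignored for simplicity):

```python
def decode_chunked(stream: bytes) -> bytes:
    """Reassemble a chunked HTTP/1.1 body: each chunk is
    '<size in hex>\r\n<data>\r\n', terminated by a zero-size chunk."""
    body = bytearray()
    pos = 0
    while True:
        crlf = stream.index(b"\r\n", pos)
        size = int(stream[pos:crlf], 16)   # e.g. b"31c2" -> 12738
        if size == 0:                      # zero-size chunk ends the body
            break
        start = crlf + 2
        body += stream[start:start + size]
        pos = start + size + 2             # skip the CRLF after the data
    return bytes(body)

# The "Hello, World!" example above, as raw bytes:
print(decode_chunked(b"5\r\nHello\r\n8\r\n, World!\r\n0\r\n\r\n"))
# -> b'Hello, World!'
```

This is also why the "31c2" in the question is not part of the file: it is framing added by the transfer, which any HTTP-aware client strips before handing you the body.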
Why is your JS file not being detected as JavaScript?
It may be a bug in Apache. You should probably add this to your .htaccess/apache2.conf/httpd.conf to solve this issue:
AddType text/javascript .js

Related

Exim filter - Can a base64 or quoted-printable encoded message body be automatically decoded like the headers?

I am attempting to filter unwanted incoming emails in my cPanel environment (with Exim as Mail Transfer Agent) based on the message body contents.
Often the message body is base64 or quoted-printable encoded (Content-Transfer-Encoding: quoted-printable or Content-Transfer-Encoding: base64), and in such cases
"$message_body contains <string>"
"$message_body matches <regexp>"
conditions fail because (I think) no decoding of the encoded message body takes place.
I read in The Exim Specification that for headers Exim decodes base64 or quoted-printable header text (an extract below):
$header_<header name>: or $h_<header name>:
$bheader_<header name>: or $bh_<header name>:
. bheader removes leading and trailing white space, and then decodes base64 or quoted-printable
MIME “words” within the header text, but does no character set translation.
. header tries to translate the string as decoded by bheader to a standard character set.
Can Exim decode base64 or quoted-printable encoded message body too? Can that be done in a cPanel & WHM environment?
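For comparison, outside Exim these two transfer encodings decode like this (Python standard library; the body fragments are made-up illustrations, and this does not answer whether Exim's filter language can do the same for $message_body):

```python
import base64
import quopri

# Hypothetical body fragments, one per Content-Transfer-Encoding value:
qp_body = b"Caf=C3=A9 special=20offer"          # quoted-printable
b64_body = base64.b64encode(b"Cheap meds inside")  # base64

print(quopri.decodestring(qp_body))   # =C3=A9 -> UTF-8 e-acute, =20 -> space
print(base64.b64decode(b64_body))
```

Any filtering on the decoded text would have to happen after a step like this; matching the raw $message_body against a plain string cannot work when the body is still encoded.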

Need Online Tool to Convert GZip compression text to ASCII (Readable) text

I am trying to view data in a Redis database.
It is compressed data using Lettuce 6.1.1 version compression library.
It uses a GZIP compression type.
I have tried several online tools to convert the GZIP text to a readable ASCII format.
The tools fail because they do not recognize the text as GZIP data. Maybe it has something to do with the compression algorithm Lettuce uses to compress the data.
Can anyone point me to a tool where I can decompress this data to readable ascii text?
Here is an example of the compressed data:
\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x00\xABVN-\xCBLNu,JM\xF4+\xCDMJ-R\xB2R2604\xB44Q\xAA\x05\x00\x190\x9B\xD1\x1E\x00\x00\x00
This should translate to a number: 301194
Here is a second example:
1.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x01\x00\x93\xC0t\xC3\x06\x00\x00\x00
2.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB0\xB0\x04\x00o\x8D\xDE\xA4\x06\x00\x00\x00
3.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x07\x00)\x91}Z\x06\x00\x00\x00
4.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x03\x00\xBF\xA1z-\x06\x00\x00\x00
5.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x00\x00\x8A\x04\x19\xC4\x06\x00\x00\x00
6.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003602\xB04\x02\x00\xA6e\x17*\x06\x00\x00\x00
7.\x1F\x8B\x08\x00\x00\x00\x00\x00\x00\x003604\xB44\x01\x00J\x05\x03\xD0\x06\x00\x00\x00
This should be a list of 7 service area numbers.
Not sure of the order but the values should be:
302090
302092
302097
302094
302096
302089
301194
I tried using this online tool:
https://codebeautify.org/gzip-decompress-online
There is no translation that appears in the translation window and no error is shown.
I also tried this website:
https://www.multiutil.com/gzip-to-text-decompress/
I get the error: Invalid compression text
UPDATE
The RedisInsight screenshot below shows the key-value information. The value information that is compressed as gzip I would like to translate.
I wanted to copy the value that I have highlighted and decompress it so I can document what is stored in the database.
There is nothing wrong with your examples 1 through 7. They are all valid gzip streams that decompress to:
302094
302089
302097
302096
302090
302092
301194
Your first example in your question, however, has an error in the integrity check at the end. It decodes to:
{#eviceAreaNumber":"301194"}
While the deflate compressed data in the gzip stream is valid, the CRC that follows it is not. The uncompressed length after that is incorrect as well.
The online tools you point to are expecting Base64-encoded data, not the partial hex encodings you are pasting there.
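As a sketch of the non-web-tool route (Python, assuming the value is copied out of RedisInsight in the \x..-escaped form shown above):

```python
import gzip

def decompress_escaped(escaped: str) -> bytes:
    # Turn the \x..-escaped text into raw bytes, then gunzip it.
    # Printable characters in the escaped string pass through unchanged.
    raw = escaped.encode("ascii").decode("unicode_escape").encode("latin-1")
    return gzip.decompress(raw)

# Round-trip demo; a real input would be one of the escaped values above:
blob = gzip.compress(b"301194")
escaped = "".join("\\x%02X" % b for b in blob)
print(decompress_escaped(escaped))  # b'301194'
```

Note that gzip.decompress also verifies the trailing CRC-32, so the first example in the question (with its bad integrity check) would raise an error here even though its compressed payload is readable.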

How to specify this particular header in Postman

Resource URL
GET https://<MATD_IP>/php/session.php
The following HTTP headers should be specified in the session request:
Accept: application/vnd.ve.v1.0+json
Content-Type: application/json
VE-SDK-API: Base64 encoded "user name:password" string
VE-API-Version (Optional)
I am confused about what it means to specify a Base64-encoded string. I have tried to do it, but I am failing at it. Can anybody help me with the exact header parameters by giving an example?
Thank you
You could use this in your Pre-request Script:
let base64 = Buffer.from("username:password").toString('base64')
pm.request.headers.add({key: "VE-SDK-API", value: base64})
This will convert to Base64 and then create the header with the encoded value.
It most likely means that you need to provide a base64 string for that field. Write down the credentials with a : in between. Ex:
cooluser:str0ngP4ssword
Then you encode this exact string as base64 which would give you:
Y29vbHVzZXI6c3RyMG5nUDRzc3dvcmQ=
You can encode via the terminal (Linux) with echo "XXX" | base64, or just search for "base64 encode" on the web (not really recommended, for security reasons).
You can then use it for the headers:
Accept: application/vnd.ve.v1.0+json
Content-Type: application/json
VE-SDK-API: Y29vbHVzZXI6c3RyMG5nUDRzc3dvcmQ=
VE-API-Version: 1.x
Omit the trailing newline that echo would otherwise append (and encode!) by using the -n option:
echo -n "username:password" | base64
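The same encoding in Python, for comparison. Like echo -n, b64encode sees exactly the bytes you give it, with no trailing newline:

```python
import base64

creds = "cooluser:str0ngP4ssword"  # the example credentials from above
token = base64.b64encode(creds.encode("utf-8")).decode("ascii")
print(token)  # Y29vbHVzZXI6c3RyMG5nUDRzc3dvcmQ=
```

Whatever tool you use, the important part is that the exact string user:password is encoded, with no extra whitespace or newline.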

SHA256 generation different for file and content of this file

I use online SHA256 converters to calculate a hash for a given file. There, I have seen an effect I don't understand.
For testing purposes, I wanted to calculate the hash for a very simple file. I named it "test.txt", and its only content is the string "abc", followed by a new line (I just pressed enter).
Now, when I put "abc" and newline into a SHA256 generator, I get the hash
edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb
But when I put the complete file into the same generator, I get the hash
552bab6864c7a7b69a502ed1854b9245c0e1a30f008aaa0b281da62585fdb025
Where does the difference come from? I used this generator (in fact, I tried several ones, and they always yield the same result):
https://emn178.github.io/online-tools/sha256_checksum.html
Note that this difference does not arise without newlines. If the file just contains the string "abc", the hash is
ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
for the file as well as just for the content.
As noted in my comment, the difference is caused by how newline characters are represented across different operating systems (see details here):
On UNIX and UNIX-like systems, newlines are represented by a line feed character (\n).
On DOS and Windows systems, newlines are represented by a carriage return followed by a line feed character (\r\n).
Compare the following two commands and their output, corresponding to the SHA256 values in your question:
echo -en "abc\n" | sha256sum
edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb
echo -en "abc\r\n" | sha256sum
552bab6864c7a7b69a502ed1854b9245c0e1a30f008aaa0b281da62585fdb025
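The same comparison with Python's hashlib, including the no-newline case from the question:

```python
import hashlib

# The only difference between the three inputs is the line ending.
for data in (b"abc", b"abc\n", b"abc\r\n"):
    print(data, hashlib.sha256(data).hexdigest())
```

The three digests match the three values quoted in the question, confirming that the generators hash exactly the bytes they are given and that the mismatch is caused by the line ending alone.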
The issue you are having could come from how the newline is represented. On Windows a newline is \r\n, while on Linux it is \n. These are different characters with different decimal values (\r is 13 and \n is 10).
More info you can find here:
https://en.wikipedia.org/wiki/Newline
https://en.wikipedia.org/wiki/List_of_Unicode_characters
I faced the same issue; looking at the data in hex mode helped me understand the actual behavior. Canonicalization of the data (normalizing line endings and the like) needs to be performed before the SHA calculation, which eliminates such issues. It needs to happen both on the generation side and on the verification side.

dll files compared to gzip files

Okay, the title isn't very clear.
Given a byte array (read from a database blob) that represents EITHER the sequence of bytes contained in a .dll or the sequence of bytes representing the gzip'd version of that dll, is there a (relatively) simple signature that I can look for to differentiate between the two?
I'm trying to puzzle this out on my own, but I've discovered I can save a lot of time by asking for help. Thanks in advance.
Check if its first two bytes are the gzip magic number 0x1f8b (see RFC 1952). Or just try to gunzip it; the operation will fail if the DLL is not gzip'd.
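A minimal sketch of that check (Python; the MZ comparison is an extra sanity check, since PE/DLL files start with the two bytes "MZ"):

```python
import gzip

def looks_like_gzip(data: bytes) -> bool:
    # RFC 1952: every gzip stream starts with the magic bytes 0x1f 0x8b
    return data[:2] == b"\x1f\x8b"

print(looks_like_gzip(gzip.compress(b"fake dll contents")))  # True
print(looks_like_gzip(b"MZ\x90\x00"))                        # False (PE/DLL header)
```

In practice, checking the magic bytes first and falling back to attempting a full gunzip gives you both a cheap test and a definitive one.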
A gzip file should be fairly straightforward to identify, as it consists of a header, a footer and some other distinguishable elements in between.
From Wikipedia:
"gzip" is often also used to refer to the gzip file format, which is:
- a 10-byte header, containing a magic number, a version number and a time stamp
- optional extra headers, such as the original file name
- a body, containing a DEFLATE-compressed payload
- an 8-byte footer, containing a CRC-32 checksum and the length of the original uncompressed data
You might also try determining whether the gzip stream contains multiple members, as each member also has its own header.
You can find specific information on this file format (specifically the member header which is linked) here.