e-EDID header is different from standard? - hdmi

I'm reading EDID information and receiving weird headers from two of my monitors.
They can both play sound, so they must be running e-EDIDs.
From what I've read, though, the header information doesn't change from an EDID to an e-EDID.
What it should be:
00 FF FF FF FF FF FF 00
What I'm getting:
00 FF FF FF 59 65 00 00
00 FF FF FF 4C 5F 00 00
Do e-EDIDs have different headers than EDIDs, and what specification can I read to find out more?
My reading:
https://en.wikipedia.org/wiki/Extended_Display_Identification_Data
http://read.pudn.com/downloads110/ebook/456020/E-EDID%20Standard.pdf
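For a quick sanity check: the 8-byte header pattern is the same fixed value for both EDID 1.x and E-EDID, so it can be compared directly against the start of the base block. A minimal Node sketch (assuming the raw 128-byte base block is available as a Buffer; the hasValidEdidHeader name is just illustrative):

// Sketch: verify the fixed EDID/E-EDID header pattern.
const EDID_HEADER = Buffer.from([0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00]);

function hasValidEdidHeader(edid) {
  return edid.length >= 8 && edid.subarray(0, 8).equals(EDID_HEADER);
}

// Example with the first bytes quoted in the question:
const received = Buffer.from('00ffffff59650000', 'hex');
console.log(hasValidEdidHeader(received)); // false - header does not match the spec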

Related

How to format/decode service logs from Docker API

I'm trying to get logs from the Docker API at this endpoint. I'm just trying to get the logs returned as a string, not using the websocket option. It mostly works, but the string contains strange characters that I'm not sure what to do with.
I'm using Axios, with Express, like so:
let result = await AXIOS.get(`http://${managerNodeIPAddress}/services/${idForLogs}/logs?stdout=true&stderr=true`);
and if I console.log(result), the data property looks like this:
data: '\x01\x00\x00\x00\x00\x00\x00#Example app listening on port 5000\n' +
'\x01\x00\x00\x00\x00\x00\x00\x1F[16/4/2022-21:05:02] GET/: 200\n' +
'\x01\x00\x00\x00\x00\x00\x00\x1F[16/4/2022-21:05:43] GET/: 200\n' +
'\x01\x00\x00\x00\x00\x00\x00\x1F[16/4/2022-21:05:44] GET/: 200\n' +
'\x01\x00\x00\x00\x00\x00\x00\x1F[16/4/2022-21:06:33] GET/: 200\n' +
// ...
and if I console.log(result.data), it looks like this:
<Buffer 01 00 00 00 00 00 00 23 45 78 61 6d 70 6c 65 20 61 70 70 20 6c 69 73 74 65 6e 69 6e 67 20 6f 6e 20 70 6f 72 74 20 35 30 30 30 0a 01 00 00 00 00 00 00 ... 972 more bytes>
If I send along this response and try to view it in Postman, or elsewhere, the viewer doesn't know what to do with the initial \x01-type strings.
I gather that they are escaped binary, or something along those lines, and that I need to change something about my request headers, or parse the axios response in a particular way, to deal with this. I would be happy either
decoding those characters into whatever they are supposed to be (I've tried "decoding" the buffer using toString('utf-8'), etc., but that doesn't seem to get rid of the characters, so they still show up strangely when passed along and viewed in certain contexts), OR
getting rid of those characters entirely (I tried to do the latter with the replace method, but it isn't working for some reason).
I've never dealt with this before, so the world of encoding/decoding things like this feels a bit mysterious, and I would appreciate any pointers anyone might have.
I think I was able to figure this out. As I understand things now, the 01 and 00 are hex bytes, which correspond to the ASCII control characters SOH (start of heading) and NUL (null). The macOS terminal doesn't have trouble interpreting them, but it appears some other applications do. I was able to get rid of them by filtering the buffer array, like so:
let logs = result.data.filter(byte => byte !== 0x01 && byte !== 0x00).toString();
I was honestly a little surprised this worked, but it seems to have. It doesn't affect how the logs look in the terminal, and they look fine in Postman now.
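For completeness, a more robust approach (a sketch, not tested against your exact setup) is to parse Docker's stream framing instead of filtering bytes: when the container runs without a TTY, the logs endpoint multiplexes stdout/stderr into frames, each starting with an 8-byte header of one stream-type byte (0x01 stdout, 0x02 stderr), three padding bytes, and a big-endian uint32 payload length, followed by that many payload bytes. Filtering out every 0x00 and 0x01 byte would also remove any legitimate occurrences of those bytes inside the log text, whereas demultiplexing the frames does not:

// Sketch: demultiplex Docker's log stream instead of stripping bytes.
// Frame layout: [stream type][3 padding bytes][payload length, uint32 BE][payload]
function demuxDockerLogs(buffer) {
  let offset = 0;
  let text = '';
  while (offset + 8 <= buffer.length) {
    const payloadLength = buffer.readUInt32BE(offset + 4);
    text += buffer.toString('utf8', offset + 8, offset + 8 + payloadLength);
    offset += 8 + payloadLength;
  }
  return text;
}

// Usage, assuming axios is asked for raw bytes rather than a decoded string:
// const result = await AXIOS.get(url, { responseType: 'arraybuffer' });
// const logs = demuxDockerLogs(Buffer.from(result.data));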

Sending `Encrypted Extension` and `Server Finished` in one handshake message. Is it mandatory in TLS1.3?

As per RFC 8446 (TLS 1.3) [https://www.rfc-editor.org/rfc/rfc8446],
EncryptedExtensions and Finished are two different handshake messages.
But in RFC 8448 (Example Handshake Traces for TLS 1.3) [https://www.rfc-editor.org/rfc/rfc8448],
in all example traces the EncryptedExtensions (message type 0x08) and server Finished
(message type 0x14) messages are concatenated and sent together.
Refer to pages 23 and 24 of RFC 8448.
payload (80 octets): **08** 00 00 28 00 26 00 0a 00 14 00 12 00 1d 00
17 00 18 00 19 01 00 01 01 01 02 01 03 01 04 00 1c 00 02 40 01
00 00 00 00 00 2a 00 00 **14** 00 00 20 48 d3 e0 e1 b3 d9 07 c6 ac
ff 14 5e 16 09 03 88 c7 7b 05 c0 50 b6 34 ab 1a 88 bb d0 dd 1a
34 b2
I know that concatenating two handshake messages (if they are sent by one entity immediately one after the other) improves performance, and RFC 8446 allows this.
But is it really mandatory for a server implementation to send the EncryptedExtensions and server Finished messages together?
Or should servers and clients support both behaviours, i.e.
a) sending the EncryptedExtensions and server Finished messages separately, one by one;
b) sending the EncryptedExtensions and server Finished messages together in one record.
TLS is sent over TCP. TCP is a byte stream, which has no concept of messages and thus no concept of "messages sent together" either. Two sends at the application level, or from within the TLS stack, might end up in the same TCP packet, just as one send might be spread over multiple TCP packets.
In other words: since the TCP layer underlying TLS is only a byte stream, which can be packetized in arbitrary ways not controlled by the upper layer, it would be impossible to enforce a mandatory requirement of sending multiple TLS messages in the same TCP packet.
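To make the framing concrete: within the quoted record payload, each handshake message carries its own 4-byte header (1-byte type, 3-byte big-endian length), so the concatenation is only how the two messages were coalesced into one record, and a receiver can always split them back apart. A small Node sketch over the 80-octet payload quoted above (the hex is copied straight from the trace):

// Sketch: split a TLS handshake record body into individual handshake messages.
// Each message starts with a 1-byte type followed by a 3-byte big-endian length.
function splitHandshakeMessages(payload) {
  const messages = [];
  let offset = 0;
  while (offset + 4 <= payload.length) {
    const type = payload[offset];
    const length = payload.readUIntBE(offset + 1, 3);
    messages.push({ type, body: payload.subarray(offset + 4, offset + 4 + length) });
    offset += 4 + length;
  }
  return messages;
}

const payload = Buffer.from(
  '080000280026000a00140012001d00' +
  '170018001901000101010201030104001c00024001' +
  '00000000002a00001400002048d3e0e1b3d907c6ac' +
  'ff145e16090388c77b05c050b634ab1a88bbd0dd1a' +
  '34b2',
  'hex');
console.log(splitHandshakeMessages(payload).map(m => m.type));
// [ 8, 20 ]  i.e. EncryptedExtensions (0x08) and Finished (0x14)

RFC 8446 Section 5.1 explicitly allows handshake messages to be coalesced into a single record or fragmented across several, so a receiving implementation has to cope with either arrangement regardless.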

EOF file sentinel 0xFFFFFFFF

In VB.NET
This should not be difficult. I need to write an EOF marker of 0xFF FF FF FF to a file. This is a simulated TAPE file on disk.
If I instantiate a BinaryWriter() called "bw"
Then at the end of my data writing session I write:
bw.Write(255) ==> will output "FF 00 00 00" in the file, in little-endian format
However, the hex sentinel I require, FF FF FF FF, is equivalent to 4,294,967,295 (Int64), and just for grins I execute:
bw.Write(4294967295)
Yields FF FF FF FF 00 00 00 00
Closer, but not correct, and I had to use an Int64 number.
Theoretically I could generate four instances of "FF 00 00 00" (255) and concatenate the FFs, but that doesn't seem legit.

String Serialization in utf-8 using Node Buffer

I have a sql database storing a blob using unhex('6BFD3D0AFDFD4E01FDFD67703A34757F').
The server retrieves the blob and stores it in a Node Buffer as <Buffer 6b 8a 3d 0a 9b eb 4e 01 96 a6 67 70 3a 34 75 7f>.
The server serializes the buffer and sends it to the client using buffer.toString(), which defaults to utf8 encoding.
The client receives and deserializes the buffer using Buffer.from(buffer, 'utf8'), which results in <Buffer 6b ef bf bd 3d 0a ef bf bd ef bf bd 4e 01 ef bf bd ef bf bd 67 70 3a 34 75 7f> and then if I convert it back to hex using .toString('hex') I get 6BEFBFBD3D0AEFBFBDEFBFBD4E01EFBFBDEFBFBD67703A34757F.
So to sum it all up, if I do:
let startHex = "6BFD3D0AFDFD4E01FDFD67703A34757F"
let buffer = Buffer.from(startHex, 'hex')
let endHex = Buffer.from(buffer.toString()).toString('hex').toUpperCase()
console.log(endHex)
The output is:
6BEFBFBD3D0AEFBFBDEFBFBD4E01EFBFBDEFBFBD67703A34757F
My question is: why are startHex and endHex different? They aren't just different; they look similar, except endHex has extra characters. I know I get the correct output if I serialize the buffer between the server and the client using base64 or binary, but for my project it is easier if the client can figure out startHex given the buffer serialized as utf8. The reason is that I do not have access to the inner workings of the server, which actually calls buffer.toString() before sending to the client, so I cannot change the encoding.
Your original input contains bytes that are not valid UTF-8. When the buffer is decoded as UTF-8, each invalid byte is replaced with the replacement character U+FFFD, which encodes as the bytes EF BF BD; you can see that sequence several times in the output.
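To see the loss happen, here is a small sketch using the hex string from the question; it shows that every 0xFD byte collapses to the same EF BF BD sequence when round-tripped through utf8, while a byte-preserving encoding such as base64 round-trips exactly:

// Sketch: the utf8 round trip is lossy, a byte-preserving encoding is not.
const startHex = '6BFD3D0AFDFD4E01FDFD67703A34757F';
const original = Buffer.from(startHex, 'hex');

// 0xFD can never appear on its own in valid UTF-8, so toString('utf8')
// replaces each one with U+FFFD, which re-encodes as the bytes EF BF BD.
const lossy = Buffer.from(original.toString('utf8'), 'utf8');
console.log(lossy.toString('hex').toUpperCase());
// 6BEFBFBD3D0AEFBFBDEFBFBD4E01EFBFBDEFBFBD67703A34757F

// base64 (or 'latin1') preserves every byte, so the round trip is exact:
const roundTripped = Buffer.from(original.toString('base64'), 'base64');
console.log(roundTripped.toString('hex').toUpperCase() === startHex); // true

Because every invalid byte maps to the same U+FFFD, the replacement discards the original byte values; once the server has called buffer.toString() with the utf8 default, the client has no way to reconstruct startHex from the string it receives.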

What is the exact procedure to perform external authentication?

I am trying to perform external authentication on a smart card. I got the 8-byte challenge from the card, and now I need to generate the cryptogram over those 8 bytes.
But I don't know how to perform that cryptogram operation (the smart card tool kit turns the 8 bytes into 72 bytes).
The following commands are generated by the tool kit:
00 A4 04 00 0C A0 00 00 02 43 00 13 00 00 00 01 04
00 22 41 A4 06 83 01 01 95 01 80
command: 80 84 00 00 08 Response: (8 bytes challenge)
command: 80 82 00 00 48 (72 bytes data)
Can anybody say what steps to follow to turn the 8-byte challenge into 72 bytes?
Conversion is not exactly the right term. You need to apply the cryptographic algorithm, with the correct key, to the received challenge. I assume that an External Authenticate command is performed, but the unusual data field length allows no assumption about the algorithm used. Possibly an external challenge is also provided in the command and session keys are established. Since the assumed Get Challenge command and the External Authenticate command have a class byte indicating a proprietary command, ISO 7816-4 won't help here and you need to refer to the card specification. To get knowledge of the key, you probably have to sign a non-disclosure agreement with the card issuer.