EOF file sentinel 0xFFFFFFFF - vb.net

In vb.NET
This should not be difficult. I need to write an EOF marker of 0xFF FF FF FF to a file. This is a simulated TAPE file on disk.
If I instantiate a BinaryWriter() called "bw"
Then at the end of my data writing session I write:
bw.Write(255) ==> will output "FF 00 00 00" in the file in Little Endian format
However the hex sentinel I require, FF FF FF FF, is equivalent to 4,294,967,295 (which VB treats as an Int64), and just for grins I execute:
bw.Write(4294967295)
Yields FF FF FF FF 00 00 00 00
Closer, but not correct, and I had to use an Int64 number.
Theoretically I could generate four instantiations of "FF 00 00 00" (255) and concatenate the FF's, but that doesn't seem legit.
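The byte layouts described above can be reproduced for illustration (sketched here in Python with struct rather than VB.NET). In VB.NET the usual fix should be to write the value as an unsigned 32-bit integer, e.g. bw.Write(&HFFFFFFFFUI) or bw.Write(UInteger.MaxValue), which emits exactly four FF bytes:

```python
import struct

# bw.Write(255) writes an Int32, so four little-endian bytes come out:
int32_bytes = struct.pack("<i", 255)
print(int32_bytes.hex(" "))   # ff 00 00 00

# A plain 4294967295 literal is typed as Int64 in VB.NET, hence 8 bytes:
int64_bytes = struct.pack("<q", 4294967295)
print(int64_bytes.hex(" "))   # ff ff ff ff 00 00 00 00

# Writing the value as an unsigned 32-bit integer gives the 4-byte sentinel:
sentinel = struct.pack("<I", 0xFFFFFFFF)
print(sentinel.hex(" "))      # ff ff ff ff
```

The point is only the width of the type handed to the writer; the writer itself always emits the raw little-endian bytes of whatever integer type it is given.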


Sending `Encrypted Extension` and `Server Finished` in one handshake message. Is it mandatory in TLS1.3?

As per RFC 8446 (TLS 1.3) [https://www.rfc-editor.org/rfc/rfc8446],
Encrypted Extensions and Finished are two different handshake messages.
But in RFC 8448 (Example Handshake Traces for TLS 1.3) [https://www.rfc-editor.org/rfc/rfc8448],
in all examples of this trace document, the Encrypted Extensions (message type 0x08) and server Finished
(message type 0x14) messages are concatenated and sent together.
Refer to pages 23 and 24 of RFC 8448.
payload (80 octets): **08** 00 00 28 00 26 00 0a 00 14 00 12 00 1d 00
17 00 18 00 19 01 00 01 01 01 02 01 03 01 04 00 1c 00 02 40 01
00 00 00 00 00 2a 00 00 **14** 00 00 20 48 d3 e0 e1 b3 d9 07 c6 ac
ff 14 5e 16 09 03 88 c7 7b 05 c0 50 b6 34 ab 1a 88 bb d0 dd 1a
34 b2
I know that sending two handshake messages together (when they are sent by one entity immediately one after the other) will improve performance, and RFC 8446 provides for this.
But is this really mandatory by any server implementation to send Encrypted Extension and Server Finished messages together?
Or should server and client support both implementations, i.e.
a) Sending the Encrypted Extensions and server Finished messages separately, one by one.
b) Sending the Encrypted Extensions and server Finished messages together in one flight.
TLS is sent over TCP. TCP is a byte stream, which has no concept of messages and thus no concept of "messages sent together" either. Two sends at the application level, or from within the TLS stack, might end up within the same TCP packet, just as one send might be spread over multiple TCP packets.
In other words: since the TCP layer underlying TLS is only a byte stream which can be packetized in arbitrary ways not controlled by the upper layer, it would be impossible to follow a mandatory requirement of sending multiple TLS messages in the same TCP packet.
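Whichever way the bytes arrive, the receiver can always split a coalesced flight back into individual messages, because each handshake message carries its own 1-byte type and 3-byte length. A minimal sketch (in Python, over the 80-octet payload quoted above):

```python
# Parse concatenated TLS 1.3 handshake messages: 1-byte type + 3-byte length + body.
payload = bytes.fromhex(
    "08000028"                                              # EncryptedExtensions, len 0x28
    "0026000a00140012001d00170018001901000101010201030104"
    "001c0002400100000000002a0000"
    "14000020"                                              # Finished, len 0x20
    "48d3e0e1b3d907c6acff145e16090388"
    "c77b05c050b634ab1a88bbd0dd1a34b2"
)

def split_handshake_messages(buf):
    """Split a byte string into (msg_type, body) handshake messages."""
    msgs = []
    i = 0
    while i < len(buf):
        msg_type = buf[i]
        length = int.from_bytes(buf[i + 1:i + 4], "big")
        msgs.append((msg_type, buf[i + 4:i + 4 + length]))
        i += 4 + length
    return msgs

msgs = split_handshake_messages(payload)
for msg_type, body in msgs:
    print(hex(msg_type), len(body))   # 0x8 40, then 0x14 32
```

This recovers exactly two messages from the trace: EncryptedExtensions (type 0x08, 40-byte body) and Finished (type 0x14, 32-byte body), confirming that "sent together" is purely a framing convenience, not a distinct message type.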

e-EDID header is different from standard?

I'm reading EDID information and receiving weird headers from two of my monitors.
They can both play sound, so they must be running e-EDIDs.
From what I've read, though, the header information doesn't change from an EDID to an e-EDID.
What it should be
00 FF FF FF FF FF FF 00
What I'm getting
00 FF FF FF 59 65 00 00
00 FF FF FF 4C 5F 00 00
Do e-EDIDs have different headers than EDIDs, and what specification can I read to find out more?
My reading:
https://en.wikipedia.org/wiki/Extended_Display_Identification_Data
http://read.pudn.com/downloads110/ebook/456020/E-EDID%20Standard.pdf
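Per the VESA documents linked above, the 8-byte header of block 0 is fixed (00 FF FF FF FF FF FF 00) for both EDID and E-EDID, and every 128-byte block must sum to 0 mod 256. A quick sanity check, sketched in Python (the sample block here is synthetic):

```python
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def check_edid_block0(block):
    """Return (header_ok, checksum_ok) for a 128-byte EDID base block."""
    header_ok = block[:8] == EDID_HEADER
    checksum_ok = sum(block) % 256 == 0
    return header_ok, checksum_ok

# Synthetic example: valid header, zero padding, and a final checksum byte
# chosen so the whole block sums to a multiple of 256.
block = bytearray(128)
block[:8] = EDID_HEADER
block[127] = (-sum(block)) % 256
print(check_edid_block0(bytes(block)))   # (True, True)
```

A block starting 00 FF FF FF 59 65 00 00 would fail both checks, which points at a corrupted fetch (e.g. a DDC/I2C read problem) rather than a different E-EDID header format.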

String Serialization in utf-8 using Node Buffer

I have a sql database storing a blob using unhex('6BFD3D0AFDFD4E01FDFD67703A34757F').
The server retrieves the blob and stores it in a Node Buffer as <Buffer 6b 8a 3d 0a 9b eb 4e 01 96 a6 67 70 3a 34 75 7f>.
The server serializes the buffer and sends it to the client using buffer.toString(), which defaults to utf8 encoding.
The client receives and deserializes the buffer using Buffer.from(buffer, 'utf8'), which results in <Buffer 6b ef bf bd 3d 0a ef bf bd ef bf bd 4e 01 ef bf bd ef bf bd 67 70 3a 34 75 7f> and then if I convert it back to hex using .toString('hex') I get 6BEFBFBD3D0AEFBFBDEFBFBD4E01EFBFBDEFBFBD67703A34757F.
So to sum it all up, if I do:
let startHex = "6BFD3D0AFDFD4E01FDFD67703A34757F"
let buffer = Buffer.from(startHex, 'hex')
let endHex = Buffer.from(buffer.toString(), 'utf8').toString('hex').toUpperCase()
console.log(endHex)
The output is:
6BEFBFBD3D0AEFBFBDEFBFBD4E01EFBFBDEFBFBD67703A34757F
My question is: why are startHex and endHex different? They aren't just different; they look similar except that endHex has extra characters. I know I get the correct output if I serialize the buffer between the server and the client using base64 or binary, but for my project it is easier if the client can figure out startHex given the buffer serialized as utf8. The reason is that I do not have access to the inner workings of the server, which actually calls buffer.toString() before sending to the client, so I cannot change the encoding.
Your original input contains bytes that are not valid UTF-8. The UTF-8 encoding of the Unicode replacement character (U+FFFD) is EF BF BD, and you can see it several times in the output.
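The round trip can be reproduced outside Node (sketched here in Python; Node's buffer.toString('utf8') applies the same U+FFFD substitution when it hits undecodable bytes):

```python
# The bytes the server stored (from the question's Buffer dump).
raw = bytes.fromhex("6b8a3d0a9beb4e0196a667703a34757f")

# Decoding as UTF-8 with replacement, then re-encoding, substitutes
# U+FFFD (EF BF BD) for every byte/sequence that is not valid UTF-8.
round_tripped = raw.decode("utf-8", errors="replace").encode("utf-8")
print(round_tripped.hex())
# 6befbfbd3d0aefbfbdefbfbd4e01efbfbdefbfbd67703a34757f
```

This also shows why the client cannot recover startHex: the substitution is lossy, so every invalid byte collapses to the same three bytes EF BF BD, and the original values are gone.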

How is PCI ROM shadowed?

In several resources I found that the ROM image must be copied to RAM at 000C0000h through 000DFFFFh. If the Class Code indicates that this is the VGA device's ROM, its code must be copied into memory starting at location 000C0000h.
1: What if I have a hungry-hungry-hippo PCI card whose ROM is bigger than 128KB?
2: What if I have a regular PCI device with a 64KB ROM, but I have 4 of them? Are they loaded sequentially into this memory range? If so (though I doubt it), how is the code image preserved between the init and boot phases?
3: What would happen if the BIOS decided to go nonconformist and designated a different memory location? Why is it important to use this range anyway?
4: How the hell is the regular case different from the VGA case? Is it just the limit that makes the difference?
Non-UEFI BIOS option ROMs typically weren't that large and had size restrictions, but UEFI drivers can be larger and placed above 1MiB by a UEFI BIOS, which switches out of real mode during the SEC phase. You can disable certain option ROMs that you don't want shadowed into this space on the next boot.
This small range was used because non-UEFI BIOSes operate in real mode (the interrupt table, the IVT as opposed to the IDT, starts at 0h as a guarantee to option ROMs) and can therefore only access the first 1MiB of memory, which they also need for other things (BIOS data, the BIOS itself, stack/heap for the BIOS and option ROMs). That said, most BIOSes ended up using unreal mode, or protected mode with virtual 8086, so that they could use the IVT and 32-bit addressing at the same time; nothing stops such a BIOS from shadowing option ROMs elsewhere in RAM and scanning that region instead (I have read that E0000–EFFFF can also be used if the BIOS is only 64KiB), except that this gets complicated if the option ROMs themselves look for other option ROMs, like the UNDI ROM looking for the BC ROM on a PXE NIC. They also used PMM services to allocate 16/32-bit heap addresses. UEFI no longer uses BIOS interrupt services; it uses EFI functions.
The option ROM for onboard graphics is in the BIOS itself and its address is known. It is classically hardcoded to be shadowed to C0000h (C000:0000h in real-mode segment:offset notation), or indeed wherever the BIOS wants. The BIOS at least makes sure it is the first option ROM that is executed; if it knows it moved it to D0000h, for instance, then it knows where to pass control. If it were allowed to sit at a variable address rather than a fixed one, it could end up unable to fit in the range if other PCIe cards' ROMs are shadowed first. Also, the BIOS would have to scan for the VGA BIOS class code first and then scan the range again, or keep an internal table of where the option ROMs are, which is more convoluted than a simple routine that linearly iterates across the space where the video BIOS always happens to be executed first. So it needs a fixed address, and if that fixed address were D0000h rather than C0000h, with other option ROMs placed around it, external fragmentation of the space would occur.
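The linear scan described above is simple enough to sketch (in Python over a captured memory snapshot; a real BIOS does this in 16-bit assembly). Option ROM images start on 2 KiB boundaries, begin with the signature 55 AA, and carry their length in 512-byte units at offset 2:

```python
def scan_option_roms(mem, start=0xC0000, end=0xE0000):
    """Yield (address, size_in_bytes) for each option ROM image found.

    `mem` is a bytes-like snapshot of physical memory starting at 0.
    Images begin on 2 KiB boundaries with signature 55 AA; byte 2 holds
    the image length in 512-byte units.
    """
    addr = start
    while addr < end:
        if mem[addr] == 0x55 and mem[addr + 1] == 0xAA:
            size = mem[addr + 2] * 512
            yield addr, size
            # Continue at the next 2 KiB boundary after the image.
            addr += max((size + 2047) // 2048, 1) * 2048
        else:
            addr += 2048

# Synthetic 1 MiB memory image with one 64 KiB ROM at C0000h:
mem = bytearray(0x100000)
mem[0xC0000:0xC0003] = bytes([0x55, 0xAA, 0x80])
print(list(scan_option_roms(mem)))   # [(786432, 65536)]
```

Because the scan starts at C0000h and walks upward, whatever image sits at C0000h (classically the video BIOS) is found and executed first, which is exactly the property the answer relies on.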
The PAMs aren't in the northbridge anymore (which doesn't exist on modern Intel CPUs); they're now part of the SADs' configuration in the L3 cache slices, which decode an address and send the request to the memory controller, DMI, a PCIe link, or the processor graphics. The BIOS can set a PAM so that reads for a certain range are sent to DMI while writes are sent to the memory controller. That way the BIOS can shadow onto itself, and PCI(e) XROMBARs can be set to the exact same addresses to which they're going to be shadowed.
On my system (Kaby lake + C230 series PCH + UEFI Secure Boot disabled), there are 3 option ROMs in the C0000h–DFFFFh region.
VGA BIOS
Option ROM Header: 0x000C0000
55 AA 80 E9 91 F9 30 30 30 30 30 30 30 30 30 30 U.....0000000000
30 30 A8 2F E9 B1 2E AF 40 00 90 0B 00          00./....@....
Signature 0xAA55
Length 0x80 (65536 bytes)
Initialization entry 0x30F991E9 //software read this wrong it's actually 0xF991E9, which is a 16 bit relative jump 0xF991; -1647
Reserved 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30
Reserved 0x30 0xA8 0x2F 0xE9 0xB1 0x2E 0xAF
PCI Data Offset 0x0040 //offset is from start of OpROM header
Expansion Header Offset 0x0B90
PCI Data Structure: 0x000C0040
50 43 49 52 86 80 06 04 1C 00 1C 00 03 00 00 03 PCIR............
80 00 00 00 00 80 80 00 ........
Signature PCIR
Vendor ID 0x8086 - Intel Corporation
Device ID 0x0406
Product Data 0x001C
Structure Length 0x001C
Structure Revision 0x03
Class Code 0x00 0x00 0x03
Image Length 0x0080
Revision Level 0x0000
Code Type 0x00
Indicator 0x80
Reserved 0x0080
SATA Controller (PnP Expansion Header for each disk)
Option ROM Header: 0x000D0000
55 AA 4D B8 00 01 CB 00 00 00 00 00 00 00 00 00 U.M.............
00 00 00 00 00 00 00 15 A0 00 9A 01 ............
Signature 0xAA55
Length 0x4D (39424 bytes)
Initialization entry 0xCB0100B8 //mov ax, 0x100 retf
Reserved 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Reserved 0x00 0x00 0x00 0x00 0x00 0x00 0x15
PCI Data Offset 0x00A0
Expansion Header Offset 0x019A
PCI Data Structure: 0x000D00A0
50 43 49 52 86 80 2A 28 1C 00 1C 00 03 00 04 01 PCIR..*(........
4D 00 02 0F 00 80 4D 00 M.....M.
Signature PCIR
Vendor ID 0x8086 - Intel Corporation
Device ID 0x282A
Product Data 0x001C
Structure Length 0x001C
Structure Revision 0x03
Class Code 0x00 0x04 0x01
Image Length 0x004D
Revision Level 0x0F02
Code Type 0x00
Indicator 0x80
Reserved 0x004D
PnP Expansion Header: 0x000D019A
24 50 6E 50 01 02 BA 01 01 06 00 00 00 00 C2 00 $PnP............
D4 00 00 04 01 C4 90 1A 00 00 00 00 00 00 00 00 ................
Signature $PnP
Revision 0x01
Length 0x02 (32 bytes)
Next Header 0x01BA
Reserved 0x01
Checksum 0x06
Device ID 0x00000000
Manufacturer 0x00C2 - Intel Corporation //location 0xD00C2
Product Name 0x00D4 - SanDisk X400 M.2 2280 256GB //location 0xD00D4
Device Type Code 0x00 0x04 0x01
Device Indicators 0xC4
Boot Connection Vector 0x1A90
Disconnect Vector 0x0000
Bootstrap Entry Vector 0x0000
Reserved 0x0000
Resource info. vector 0x0000
PnP Expansion Header: 0x000D01BA
24 50 6E 50 01 02 00 00 02 9B 00 00 00 00 C2 00 $PnP............
F5 00 00 04 01 C4 94 1A 00 00 00 00 00 00 00 00 ................
Signature $PnP
Revision 0x01
Length 0x02 (32 bytes)
Next Header 0x0000 //next PnP expansion header contains nothing useful
Reserved 0x02
Checksum 0x9B
Device ID 0x00000000
Manufacturer 0x00C2 - Intel Corporation
Product Name 0x00F5 - ST1000LM035-1RK172
Device Type Code 0x00 0x04 0x01
Device Indicators 0xC4
Boot Connection Vector 0x1A94
Disconnect Vector 0x0000
Bootstrap Entry Vector 0x0000
Reserved 0x0000
Resource info. vector 0x0000
Ethernet Controller
Option ROM Header: 0x000DA000
55 AA 08 E8 76 10 CB 55 BC 01 00 00 00 00 00 00 U...v..U........
00 00 00 00 00 00 20 00 40 00 60 00 ...... .@.`.
Signature 0xAA55
Length 0x08 (4096 bytes)
Initialization entry 0xCB1076E8 //call then far return
Reserved 0x55 0xBC 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Reserved 0x00 0x00 0x00 0x00 0x00
PXEROMID Offset 0x0020 //RWEverything didn't pick it up as a separate field and made it part of the reserved section so I separated it.
PCI Data Offset 0x0040
Expansion Header Offset 0x0060
UNDI ROM ID Structure: 0x000DA020 //not recognised by RW Everything so I parsed it myself
55 4E 44 49 16 08 00 00 01 02 32 0D 00 08 B0 C4 UNDI......2...
80 46 50 43 49 52 .FPCIR
Signature UNDI
StructLength 0x16
Checksum 0x08
StructRev 0x00
UNDIRev 0x00 0x01 0x02
UNDI Loader Offset 0x0D32
StackSize 0x0800
DataSize 0xC4B0
CodeSize 0x4680
BusType PCIR
PCI Data Structure: 0x000DA040
50 43 49 52 EC 10 68 81 00 00 1C 00 03 00 00 02 PCIR..h.........
08 00 01 02 00 80 08 00 ........
Signature PCIR
Vendor ID 0x10EC - Realtek Semiconductor
Device ID 0x8168
Product Data 0x0000
Structure Length 0x001C
Structure Revision 0x03
Class Code 0x00 0x00 0x02
Image Length 0x0008
Revision Level 0x0201
Code Type 0x00
Indicator 0x80
Reserved 0x0008
PnP Expansion Header: 0x000DA060
24 50 6E 50 01 02 00 00 00 D7 00 00 00 00 AF 00 $PnP............
92 01 02 00 00 E4 00 00 00 00 C1 0B 00 00 00 00 ................
Signature $PnP
Revision 0x01
Length 0x02 (32 bytes)
Next Header 0x0000
Reserved 0x00
Checksum 0xD7
Device ID 0x00000000
Manufacturer 0x00AF - Intel Corporation
Product Name 0x0192 - Realtek PXE B02 D00
Device Type Code 0x02 0x00 0x00
Device Indicators 0xE4
Boot Connection Vector 0x0000
Disconnect Vector 0x0000
Bootstrap Entry Vector 0x0BC1 // will be at 0xDABC1
Reserved 0x0000
Resource info. vector 0x0000
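The PCI Data Structures in the dumps above can be decoded mechanically; a sketch using the Realtek bytes at 0x000DA040 (multi-byte fields are little-endian, and the image length is in 512-byte units, per the PCI firmware specification):

```python
import struct

# The 24 bytes at 0x000DA040 (the Realtek NIC's PCI Data Structure above).
data = bytes.fromhex("50434952ec10688100001c00030000020800010200800800")

(sig, vendor, device, product_data, struct_len, struct_rev,
 class_code, image_len, revision, code_type, indicator,
 _reserved) = struct.unpack("<4sHHHHB3sHHBBH", data)

print(sig, hex(vendor), hex(device), image_len * 512)
# b'PCIR' 0x10ec 0x8168 4096
```

The decoded image length (0x0008 units, i.e. 4096 bytes) matches the "Length 0x08 (4096 bytes)" field of the corresponding option ROM header, and the indicator byte 0x80 marks this as the last image in the ROM.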
On another computer with a legacy BIOS, only the VGA option ROM appears in a scan of this region, and at 0xC0000. No EHCI and no SATA. This suggests to me that the SATA support is embedded in the BIOS as part of the BIOS code and not as an option ROM, which is known as a BAID in the BBS: the code to initialise the controller, scan for boot devices, enter their info in the IPL table, and hook int 13h so the MBR/VBR can access the disk is hardcoded in the BIOS. Also, BCV hook order priorities no longer matter, because nowadays they're all entered into the IPL table as BAIDs anyway, rather than just disk 80h being bootable (populated by reading the current disk enumeration number from the BDA and then filling in that many entries with their respective details acquired from int 13h calls). Presumably the IPL table contains the disk number to boot from and passes it to the vector in the entry, which will be code shared by all BAIDs that loads the first sector from the disk to 0x7c00 using int 13h, checks for a valid MBR, and then passes control. The MBR will then move itself away from 0x7c00, load the active partition's first sector, i.e. the VBR, to 0x7c00, and pass control to it, which begins with a JMP instruction (if it's the default Windows one; if it's GRUB, it will load and pass control to core.img from sectors 1–65). The VBR will then load the IPL from sectors 1–15, which it locates on the disk using the HiddenSectors value in the BPB in the VBR, and pass control to it. For further details on Windows boot from this point, see my answer here.
The code at the initialisation entry of the SATA controller ROM simply does mov ax, 0x100 / retf. This appears to have been modified by the BIOS, or by the option ROM itself, after initialisation took place, or it may just be a dud because the device is embedded and initialised elsewhere in the BIOS. By definition the initialisation code is useless after init, and option ROMs that adhere to DDIM may remove the initialisation code from RAM once it has run and recompute the length and checksum. The video BIOS's entry even performs a negative relative jump. This suggests that the BIOS shadows its video BIOS to a location before C0000h, partially overlapping the VGA VRAM region, such that the option ROM header appears at C0000h.
'Length' appears to describe the amount of space the image occupies in the legacy option ROM shadow RAM region after initialisation of the option ROM.
Here is an example memory map of a split PXE option ROM:
Rather than setting all the XROMBARs in one go such that the pre-initialisation lengths are included, the BIOS probably loads and initialises one at a time. It can't overwrite or remove an option ROM, because routines in the option ROM separate from the IPL code may still be called via the BCV to interact with the hardware. For instance, after the video BIOS initialisation code has run, the PCI configuration space is scanned and the first XROMBAR that returns a length is set to point at the end of the video BIOS. Reads are then directed to DMI and writes to memory. The BIOS then shadows the image to memory, redirects reads to RAM, and performs a far call to the initialisation entry. The option ROM then shrinks itself by removing the initialisation code. The BIOS checks the new size of the option ROM, looks for PnP expansion headers, and registers BEVs/BCVs. It then redirects reads back to DMI and processes the next XROMBAR. The BIOS builds the BCV table an entry at a time and then executes the BCVs in order of BCV priority.
An option ROM could move the BEV/BCV to a PMM allocation in extended memory, leaving a jump instruction at the BEV/BCV offset, but that would break relative addressing of jumps from the BCV/BEV into functions in the rest of the option ROM. It could therefore relocate its whole self to a PMM allocation and shrink to just the headers, but clearly this isn't the case with most option ROMs. A BEV does relocate the UNDI driver, though.
1: It is not possible to have a ROM this big copied into the option ROM space. The image size field is 1 byte and is interpreted in 512-byte increments, so the maximum is 255 * 512 bytes = 127.5KB.
2: Too bad, some of them won't be initialized.
3: There are PAMs in the northbridge (see the Intel chipset datasheet). These registers can write-protect specific ranges in the option ROM space.
4: The limit applies to VGA too. It just has to start at C0000h, while a NIC, say, can start at D0000h as well.

What is the exact procedure to perform external authentication?

I am trying to perform external authentication with a smart card. I got the 8-byte challenge from the card, and now I need to generate the cryptogram over those 8 bytes.
But I don't know how to perform that cryptogram operation (the smartcard toolkit converts the 8 bytes into 72 bytes).
The following commands are generated by the toolkit:
00 A4 04 00 0C A0 00 00 02 43 00 13 00 00 00 01 04
00 22 41 A4 06 83 01 01 95 01 80
command: 80 84 00 00 08 Response: (8 bytes challenge)
command: 80 82 00 00 48 (72 bytes data)
Can anybody say what steps to follow to turn the 8-byte challenge into 72 bytes?
Conversion is not exactly the right term. You need to apply the cryptographic algorithm, with the correct key, to the received challenge. I assume that an External Authenticate command is performed, but the unusual data field length allows no conclusion about the algorithm used. Possibly an external challenge is also provided in the command and session keys are established. Since the assumed Get Challenge command and the External Authenticate command have a class byte indicating a proprietary command, ISO 7816-4 won't help here, and you need to refer to the card specification. To get knowledge of the key you will probably have to sign a non-disclosure agreement with the card issuer.
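For reference, the command headers in the trace split into the standard ISO 7816-4 APDU fields (a sketch; since CLA is 80, the commands are proprietary, so the meaning of INS 84/82 here follows the card's own specification even though the codes match the interindustry GET CHALLENGE and EXTERNAL AUTHENTICATE):

```python
def parse_apdu_header(hex_str):
    """Split a short command APDU header into CLA, INS, P1, P2 and the
    fifth byte (Lc for commands with data, Le for commands expecting data)."""
    b = bytes.fromhex(hex_str.replace(" ", ""))
    apdu = {"CLA": b[0], "INS": b[1], "P1": b[2], "P2": b[3]}
    if len(b) > 4:
        apdu["Lc_or_Le"] = b[4]
    return apdu

get_challenge = parse_apdu_header("80 84 00 00 08")   # Le = 8: expect 8-byte challenge
ext_auth = parse_apdu_header("80 82 00 00 48")        # Lc = 0x48 = 72 bytes of data
print(get_challenge)
print(ext_auth["Lc_or_Le"])
```

So the 0x48 in the last command is simply the length of the authentication data field (72 bytes); what those 72 bytes must contain (cryptogram, external challenge, key derivation data, and so on) is exactly what the card specification has to define.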