How is the FLV format contained in RTMP?

I'm using Wireshark to inspect the packets, but I'm confused about how exactly the FLV format is followed in RTMP streaming. The FLV spec defines a tag as: tag type, DataSize, Timestamp, TimestampExtended, StreamID, VideoTagHeader, but I'm getting
[fmt] [timestamp 2000] [body size] [typeID (tag type)] [stream ID]
04 00 07 d0 00 00 2c 09 01 00 00 00
When streaming, does the FLV timestamp just use the RTMP timestamp, and therefore not follow FLV's big-endian tag layout but rather use the RTMP (extended) timestamp?
So how exactly does the FLV container get used in RTMP video streaming?
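For reference, here is a minimal sketch (my own interpretation, assuming the 12 bytes above are a type-0 RTMP chunk header with a one-byte basic header) that decodes the capture according to the annotation above:

```python
import struct

def parse_rtmp_type0_header(data: bytes):
    """Decode a 12-byte RTMP chunk with a one-byte basic header and a
    type-0 (11-byte) message header. Field layout per the RTMP spec, not FLV."""
    fmt = data[0] >> 6                               # 2-bit chunk header format
    csid = data[0] & 0x3F                            # 6-bit chunk stream id
    timestamp = int.from_bytes(data[1:4], "big")     # 3 bytes, big-endian
    body_size = int.from_bytes(data[4:7], "big")     # 3 bytes, big-endian
    type_id = data[7]                                # 0x08 audio, 0x09 video
    stream_id = struct.unpack("<I", data[8:12])[0]   # 4 bytes, little-endian
    # if timestamp were 0xFFFFFF, a 4-byte extended timestamp would follow
    return fmt, csid, timestamp, body_size, type_id, stream_id

hdr = bytes.fromhex("040007d000002c0901000000")      # the capture above
print(parse_rtmp_type0_header(hdr))
# -> (0, 4, 2000, 44, 9, 1): video message, 44-byte body, timestamp 2000
```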

Related

Sending `Encrypted Extension` and `Server Finished` in one handshake message. Is it mandatory in TLS 1.3?

As per RFC 8446 (TLS 1.3) [https://www.rfc-editor.org/rfc/rfc8446],
EncryptedExtensions and Finished are two different handshake messages.
But in RFC 8448 (Example Handshake Traces for TLS 1.3) [https://www.rfc-editor.org/rfc/rfc8448],
in all examples of that trace document, the EncryptedExtensions (message type 0x08) and server Finished
(message type 0x14) messages are concatenated and sent together.
Refer to pages 23 and 24 of RFC 8448.
payload (80 octets): **08** 00 00 28 00 26 00 0a 00 14 00 12 00 1d 00
17 00 18 00 19 01 00 01 01 01 02 01 03 01 04 00 1c 00 02 40 01
00 00 00 00 00 2a 00 00 **14** 00 00 20 48 d3 e0 e1 b3 d9 07 c6 ac
ff 14 5e 16 09 03 88 c7 7b 05 c0 50 b6 34 ab 1a 88 bb d0 dd 1a
34 b2
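A minimal sketch (mine; it only uses the 4-byte handshake header from RFC 8446, section 4) that splits this single record payload into its handshake messages:

```python
# Each handshake message starts with a 1-byte type and a 3-byte big-endian
# length; several messages may be coalesced into one record plaintext.
HANDSHAKE_TYPES = {0x08: "encrypted_extensions", 0x14: "finished"}

payload = bytes.fromhex(              # the 80-octet payload quoted above
    "080000280026000a00140012001d00"
    "170018001901000101010201030104001c00024001"
    "00000000002a00001400002048d3e0e1b3d907c6ac"
    "ff145e16090388c77b05c050b634ab1a88bbd0dd1a"
    "34b2"
)

pos = 0
while pos < len(payload):
    msg_type = payload[pos]
    length = int.from_bytes(payload[pos + 1:pos + 4], "big")
    print(HANDSHAKE_TYPES.get(msg_type, hex(msg_type)), length, "bytes")
    pos += 4 + length

# prints:
#   encrypted_extensions 40 bytes
#   finished 32 bytes
```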
I know that concatenating two handshake messages (if they are sent by one entity immediately one after the other) will improve performance, and RFC 8446 provides for this.
But is it really mandatory for a server implementation to send the EncryptedExtensions and server Finished messages together?
Or should server and client support both behaviours, i.e.
a) sending the EncryptedExtensions and server Finished messages separately, one by one;
b) sending the EncryptedExtensions and server Finished messages together in one record?
TLS is sent over TCP. TCP is a byte stream which has no concept of messages, and thus no concept of "messages sent together" either. Two sends at the application level or from within the TLS stack might end up in the same TCP packet, just as one send might be spread over multiple TCP packets.
In other words: since the TCP layer underlying TLS is only a byte stream which can be packetized in arbitrary ways not controlled by the upper layer, it would be impossible to enforce a mandatory requirement of sending multiple TLS messages in the same TCP packet.

Extracting GPS metadata from hex of JPG image

I am trying to extract GPS metadata from hex following this tutorial, but cannot understand why at the end the latitude and longitude have length 24 and values 42 and 73:
http://itbrigadeinc.com/post/2012/03/06/Anatomy-of-a-JPG-image.aspx
http://www.itbrigadeinc.com/post/2012/03/16/Seeing-the-EXIF-data-for-a-JPG-image.aspx
I found the tags for latitude (00 02 00 05 00 00 00 03 00 00 02 42) and longitude (00 04 00 05 00 00 00 03 00 00 02 5A). As I understood it, if count = 3, then the values should follow in the last 4 bytes of each tag, but 02 42 and 02 5A are not "42" and "73"...
Could someone explain to me what is wrong?
Please, don't recommend any tools - I need to do it manually.
You need to also consider the size of each value. The count is three, but the size of each is larger than one byte. Therefore it won't fit in the four bytes, and those four bytes represent an offset to the value.
GPS data is usually stored as three rational numbers, where each rational number is two 32-bit integers (numerator, denominator). Therefore you have three values for latitude, but each is 8 bytes. The 24 bytes won't fit within the TIFF tag, so it is stored somewhere else in the file, and the four bytes you're seeing are a pointer to it. You need to look into the spec to find out where that pointer is relative to, as it's probably not the start of the file.
Check out my metadata extractor libraries (in Java and C#) for reference.
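As a rough illustration (not code from those libraries), here is a minimal sketch of reading one GPS IFD entry and the 24 bytes it points to, assuming a big-endian ('MM') TIFF header and that, per the TIFF spec, value offsets are relative to the start of that header:

```python
import struct

def read_gps_coordinate(tiff: bytes, entry_offset: int) -> float:
    """Decode one GPS IFD entry (e.g. GPSLatitude, tag 0x0002) and follow its
    offset to the three RATIONAL values (degrees, minutes, seconds)."""
    tag, vtype, count, value_or_offset = struct.unpack_from(">HHII", tiff, entry_offset)
    # type 5 = RATIONAL (two uint32s each), so count 3 needs 24 bytes: too big
    # for the 4-byte value field, which therefore holds an offset from the
    # start of the TIFF header rather than the value itself.
    assert vtype == 5 and count == 3
    n0, d0, n1, d1, n2, d2 = struct.unpack_from(">6I", tiff, value_or_offset)
    return n0 / d0 + (n1 / d1) / 60 + (n2 / d2) / 3600
```

So the 02 42 in the latitude tag is not the value; it means the 24 bytes of latitude data start 0x242 bytes after the TIFF header, and the 42 and 73 only appear once each numerator/denominator pair is divided out.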
Apparently the data type is PropertyTagTypeRational, hence the 24-byte value (three 8-byte rationals)
https://msdn.microsoft.com/en-us/library/ms534414(v=vs.85).aspx
Specifies that the value data member is an array of pairs of unsigned long integers. Each pair represents a fraction; the first integer is the numerator and the second integer is the denominator.
Mostly gotten from: Getting GPS data from an image's EXIF in C#
This bit of Python code might also give a good hint at how to decode the data: http://eran.sandler.co.il/2011/05/20/extract-gps-latitude-and-longitude-data-from-exif-using-python-imaging-library-pil/

Creating Custom UDP Packet

I am trying to interface with my internet connected gas fire. The manufacturer has told me that I can communicate with it on UDP port 3300.
He says I can send the packet with the information "SEARCH_FOR_FIRES" to the local subnet address to receive a response.
The packets should be composed in 15 bytes, as follows:
Byte 1: StartByte(0x47 'G')
Byte 2: Command ID
Byte 3: DataSize
Byte 4-13: Data
Byte 14: CRC
Byte 15: End Byte (0x46 'F')
They give 0x473100000000000000000000003146 as an example. 31 is the command ID for the "SEARCH_FOR_FIRES" command.
The only problem is I have no idea how to create these packets... I'm using the Windows version of Packet Sender and it gives me the option of inputting ASCII or HEX values. So far I have:
HEX: 47 31 00 03 01 46
ASCII: G1\00\03\01F
But neither of them seems to work, and I don't know how to find the HEX equivalent of 0x473100000000000000000000003146.
Can someone help?
Well, that sounds weird, but the hex equivalent of 0x473100000000000000000000003146 is... 0x473100000000000000000000003146 itself :) The "0x" prefix stands for hexadecimal representation and is followed by hex digits, so you need to pass "47 31 00 00 00 00 00 00 00 00 00 00 00 31 46" to Packet Sender.
By the way, do you know what to expect from the device after a successful send? Should the device give some noticeable indication of processing the "SEARCH_FOR_FIRES" command? It's possible that the device will silently send some report back to you in a UDP packet, so you may need to set up network capturing (e.g. Wireshark) to see and analyse the response.
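If Packet Sender still doesn't cooperate, here is a minimal sketch in Python (the broadcast address 192.168.1.255 is just a placeholder for your subnet; the frame bytes are the manufacturer's example from the question):

```python
import socket

# the 15-byte example frame: G, command 0x31, size, 10 data bytes, CRC, F
SEARCH_FOR_FIRES = bytes.fromhex("47 31 00 00 00 00 00 00 00 00 00 00 00 31 46")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.settimeout(5.0)
sock.sendto(SEARCH_FOR_FIRES, ("192.168.1.255", 3300))   # adjust to your subnet

try:
    data, addr = sock.recvfrom(1024)
    print("reply from", addr, data.hex(" "))
except socket.timeout:
    print("no reply; check the broadcast address and capture with Wireshark")
```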

How is PCI ROM shadowed?

In several resources I found that the ROM image must be copied to RAM at 000C0000h through 000DFFFFh, and that if the Class Code indicates that this is the VGA device ROM, its code must be copied into memory starting at location 000C0000h.
1: What if I have a hungry-hungry-hippo PCI card that has a ROM bigger than 128 KB?
2: What if I have a regular PCI device with a 64 KB ROM, but I have 4 of them? Are they loaded sequentially into this memory range? If so (though I doubt it), how is the code image preserved between the init and boot phases?
3: What would happen if the BIOS decided to go nonconformist and designated a different memory location? Why is it important to use this range anyway?
4: How the hell is the regular case different from the VGA case? Is it just the limit that makes the difference?
Non-UEFI BIOS option ROMs typically aren't that large and had size restrictions, but UEFI drivers can be larger and can be placed above 1 MiB by a UEFI BIOS, which switches out of real mode during the SEC phase. You can disable certain option ROMs that you don't want shadowed into the space on the next boot.
This small range was used because non-UEFI BIOSes operate in real mode (the interrupt table, the IVT as opposed to the IDT, starts at 0h as a guarantee to option ROMs) and can therefore only access the first 1 MiB of memory, and they need memory for other things too (BIOS data, the BIOS itself, stack/heap for the BIOS and option ROMs). That said, most BIOSes ended up using unreal mode, or protected mode with virtual 8086, so they could use the IVT and address 32 bits at the same time, so nothing stops the BIOS from shadowing option ROMs elsewhere in RAM and scanning that region instead (I have read that E0000–EFFFF can also be used if the BIOS is only 64 KiB), except that this gets complicated if the option ROMs themselves look for other option ROMs, like the UNDI ROM looking for the BC ROM on a PXE NIC. They also used PMM services to allocate 16/32-bit heap addresses. UEFI no longer uses BIOS interrupt services; it uses EFI functions.
The option ROM for onboard graphics is in the BIOS itself and its address is known. It is classically hardcoded to be moved to C0000h (C000:0000h in real mode segmentation notation), or indeed wherever it wants. The BIOS makes sure at least that it is the first option ROM that is executed, but if it knows it moved it to D0000h for instance then it knows where to pass control. If it were allowed to be at a variable address rather than a fixed address then it could end up not being able to fit in the range if other PCIe cards are shadowed first. Also, it would have to scan for the VGA bios class code first and then scan the range again or keep an internal table of where option ROMs are, which is more convoluted than a simple routine that linearly iterates across the space and the video BIOS always happens to be executed first. So it needs a fixed address, and if it makes the fixed address D0000h rather than C0000h and places other option ROMs around it, then external fragmentation of the space will occur.
The PAMs aren't in the northbridge anymore (which doesn't exist on modern Intel CPUs); they're now part of the SADs' configuration in the L3 cache slice, which decode the address and send a request to the memory controller, DMI, a PCIe link or the processor graphics. The BIOS can set the PAM so that reads for a certain range are sent to DMI and writes are sent to the memory controller. That way the BIOS can shadow onto itself, and PCI(e) XROMBARs can be set to the exact same address to which they're going to be shadowed.
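The dumps below were read with RW Everything, but an equivalent linear scan of the shadow window is simple; here is a minimal sketch (mine, assuming Linux, root access, and a kernel that still allows /dev/mem reads below 1 MiB):

```python
import mmap, os

# Scan C0000h-DFFFFh for 0x55AA option ROM signatures and report the size
# field (in 512-byte units) and the offset of the PCI Data Structure, which
# lives at bytes 0x18-0x19 of the option ROM header.
fd = os.open("/dev/mem", os.O_RDONLY)
mem = mmap.mmap(fd, 0x20000, mmap.MAP_SHARED, mmap.PROT_READ, offset=0xC0000)

addr = 0
while addr < 0x20000:
    if mem[addr:addr + 2] == b"\x55\xAA":               # option ROM signature
        size = mem[addr + 2] * 512                       # length field, 512-byte units
        pcir = int.from_bytes(mem[addr + 0x18:addr + 0x1A], "little")
        print(f"option ROM at {0xC0000 + addr:#x}: {size} bytes, PCIR at +{pcir:#x}")
        addr = (addr + max(size, 2048) + 2047) & ~2047   # next 2 KiB boundary
    else:
        addr += 2048

mem.close()
os.close(fd)
```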
On my system (Kaby lake + C230 series PCH + UEFI Secure Boot disabled), there are 3 option ROMs in the C0000h–DFFFFh region.
VGA BIOS
Option ROM Header: 0x000C0000
55 AA 80 E9 91 F9 30 30 30 30 30 30 30 30 30 30 U.....0000000000
30 30 A8 2F E9 B1 2E AF 40 00 90 0B 00./....@...
Signature 0xAA55
Length 0x80 (65536 bytes)
Initialization entry 0x30F991E9 //software read this wrong it's actually 0xF991E9, which is a 16 bit relative jump 0xF991; -1647
Reserved 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30
Reserved 0x30 0xA8 0x2F 0xE9 0xB1 0x2E 0xAF
PCI Data Offset 0x0040 //offset is from start of OpROM header
Expansion Header Offset 0x0B90
PCI Data Structure: 0x000C0040
50 43 49 52 86 80 06 04 1C 00 1C 00 03 00 00 03 PCIR............
80 00 00 00 00 80 80 00 ........
Signature PCIR
Vendor ID 0x8086 - Intel Corporation
Device ID 0x0406
Product Data 0x001C
Structure Length 0x001C
Structure Revision 0x03
Class Code 0x00 0x00 0x03
Image Length 0x0080
Revision Level 0x0000
Code Type 0x00
Indicator 0x80
Reserved 0x0080
SATA Controller (PnP Expansion Header for each disk)
Option ROM Header: 0x000D0000
55 AA 4D B8 00 01 CB 00 00 00 00 00 00 00 00 00 U.M.............
00 00 00 00 00 00 00 15 A0 00 9A 01 ............
Signature 0xAA55
Length 0x4D (39424 bytes)
Initialization entry 0xCB0100B8 //mov ax, 0x100 retf
Reserved 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Reserved 0x00 0x00 0x00 0x00 0x00 0x00 0x15
PCI Data Offset 0x00A0
Expansion Header Offset 0x019A
PCI Data Structure: 0x000D00A0
50 43 49 52 86 80 2A 28 1C 00 1C 00 03 00 04 01 PCIR..*(........
4D 00 02 0F 00 80 4D 00 M.....M.
Signature PCIR
Vendor ID 0x8086 - Intel Corporation
Device ID 0x282A
Product Data 0x001C
Structure Length 0x001C
Structure Revision 0x03
Class Code 0x00 0x04 0x01
Image Length 0x004D
Revision Level 0x0F02
Code Type 0x00
Indicator 0x80
Reserved 0x004D
PnP Expansion Header: 0x000D019A
24 50 6E 50 01 02 BA 01 01 06 00 00 00 00 C2 00 $PnP............
D4 00 00 04 01 C4 90 1A 00 00 00 00 00 00 00 00 ................
Signature $PnP
Revision 0x01
Length 0x02 (32 bytes)
Next Header 0x01BA
Reserved 0x01
Checksum 0x06
Device ID 0x00000000
Manufacturer 0x00C2 - Intel Corporation //location 0xD00C2
Product Name 0x00D4 - SanDisk X400 M.2 2280 256GB //location 0xD00D4
Device Type Code 0x00 0x04 0x01
Device Indicators 0xC4
Boot Connection Vector 0x1A90
Disconnect Vector 0x0000
Bootstrap Entry Vector 0x0000
Reserved 0x0000
Resource info. vector 0x0000
PnP Expansion Header: 0x000D01BA
24 50 6E 50 01 02 00 00 02 9B 00 00 00 00 C2 00 $PnP............
F5 00 00 04 01 C4 94 1A 00 00 00 00 00 00 00 00 ................
Signature $PnP
Revision 0x01
Length 0x02 (32 bytes)
Next Header 0x0000 //next PnP expansion header contains nothing useful
Reserved 0x02
Checksum 0x9B
Device ID 0x00000000
Manufacturer 0x00C2 - Intel Corporation
Product Name 0x00F5 - ST1000LM035-1RK172
Device Type Code 0x00 0x04 0x01
Device Indicators 0xC4
Boot Connection Vector 0x1A94
Disconnect Vector 0x0000
Bootstrap Entry Vector 0x0000
Reserved 0x0000
Resource info. vector 0x0000
Ethernet Controller
Option ROM Header: 0x000DA000
55 AA 08 E8 76 10 CB 55 BC 01 00 00 00 00 00 00 U...v..U........
00 00 00 00 00 00 20 00 40 00 60 00 ...... .@.`.
Signature 0xAA55
Length 0x08 (4096 bytes)
Initialization entry 0xCB1076E8 //call then far return
Reserved 0x55 0xBC 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Reserved 0x00 0x00 0x00 0x00 0x00
PXEROMID Offset 0x0020 //RWEverything didn't pick it up as a separate field and made it part of the reserved section so I separated it.
PCI Data Offset 0x0040
Expansion Header Offset 0x0060
UNDI ROM ID Structure: 0x000DA020 //not recognised by RW Everything so I parsed it myself
55 4E 44 49 16 08 00 00 01 02 32 0D 00 08 B0 C4 UNDI......2.....
80 46 50 43 49 52 .FPCIR
Signature UNDI
StructLength 0x16
Checksum 0x08
StructRev 0x00
UNDIRev 0x00 0x01 0x02
UNDI Loader Offset 0x0D32
StackSize 0x0800
DataSize 0xC4B0
CodeSize 0x4680
BusType PCIR
PCI Data Structure: 0x000DA040
50 43 49 52 EC 10 68 81 00 00 1C 00 03 00 00 02 PCIR..h.........
08 00 01 02 00 80 08 00 ........
Signature PCIR
Vendor ID 0x10EC - Realtek Semiconductor
Device ID 0x8168
Product Data 0x0000
Structure Length 0x001C
Structure Revision 0x03
Class Code 0x00 0x00 0x02
Image Length 0x0008
Revision Level 0x0201
Code Type 0x00
Indicator 0x80
Reserved 0x0008
PnP Expansion Header: 0x000DA060
24 50 6E 50 01 02 00 00 00 D7 00 00 00 00 AF 00 $PnP............
92 01 02 00 00 E4 00 00 00 00 C1 0B 00 00 00 00 ................
Signature $PnP
Revision 0x01
Length 0x02 (32 bytes)
Next Header 0x0000
Reserved 0x00
Checksum 0xD7
Device ID 0x00000000
Manufacturer 0x00AF - Intel Corporation
Product Name 0x0192 - Realtek PXE B02 D00
Device Type Code 0x02 0x00 0x00
Device Indicators 0xE4
Boot Connection Vector 0x0000
Disconnect Vector 0x0000
Bootstrap Entry Vector 0x0BC1 // will be at 0xDABC1
Reserved 0x0000
Resource info. vector 0x0000
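To make the dumped fields concrete, here is a minimal sketch (mine, not RW Everything output) that decodes the VGA BIOS's PCI Data Structure quoted above, using the field offsets from the PCI firmware specification:

```python
import struct

# The 24 bytes of the "PCIR" block dumped at 0x000C0040; multi-byte fields are
# little-endian, and the class code is stored low byte first
# (prog-if, sub-class, base class), so 00 00 03 means base class 0x03 = display.
pcir = bytes.fromhex("5043495286800604" "1c001c0003000003" "8000000000808000")

sig, vendor, device, devlist, length, rev = struct.unpack_from("<4sHHHHB", pcir, 0)
classcode = pcir[0x0D:0x10]
image_len, rev_level, code_type, indicator = struct.unpack_from("<HHBB", pcir, 0x10)

print(sig, hex(vendor), hex(device))               # b'PCIR' 0x8086 0x406
print("base class", hex(classcode[2]))             # 0x3 -> display controller
print("image length", image_len * 512, "bytes")    # 0x80 * 512 = 65536
print("code type", code_type, "indicator", hex(indicator))  # 0 (x86), 0x80 = last image
```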
On another computer with a legacy BIOS, only the VGA option ROM appears in a scan of this region, and at 0xC0000. No EHCI and no SATA. This suggests to me that the SATA support is embedded in the BIOS as part of the BIOS code and not as an option ROM, which is known as a BAID in the BBS; the code to initialise the controller, scan for boot devices, enter their info in the IPL table, and hook int 13h so the MBR/VBR can access the disk is hardcoded in the BIOS. Also, BCV hook order priorities no longer matter because nowadays they're all entered into the IPL table as BAIDs anyway, rather than just disk 80h being bootable (populated by reading the current disk enumeration number from the BDA and then filling in that many entries with their respective details acquired from int 13h calls).
Presumably, the IPL table contains the disk number to boot from and passes it to the vector in the entry, which will be the code shared by all BAIDs; this loads the first sector from the disk to 0x7c00 using int 13h, checks for a valid MBR, and then passes control. The MBR will then move itself away from 0x7c00, load the active partition's first sector, i.e. the VBR, to 0x7c00, and pass control to it, starting at its JMP instruction (if it's the default Windows one; if it's GRUB, it will load and pass control to core.img from sectors 1–65). The VBR will then load the IPL in sectors 1–15, which it locates on the disk using the HiddenSectors value in the BPB in the VBR, and passes control to it. For further details on Windows boot from this point, see my answer here.
The code at the initialisation entry of the SATA controller is just `mov ax, 0x100` followed by a far return. This appears to have been modified by the BIOS or the option ROM itself after the initialisation has taken place, or it may just be a dud because it's an embedded device that is initialised elsewhere in the BIOS. By definition the initialisation code is now useless, and option ROMs that adhere to DDIM may remove the initialisation code from RAM once initialisation has run and then recompute the length and checksum. The video BIOS even does a negative relative jump. This suggests that the BIOS shadows its video BIOS to a location before C0000h, shadowing partially over the VGA VRAM region, such that the option ROM header appears at C0000h.
'length' appears to be describing the amount of space in the legacy option ROM shadow RAM region it takes up after initialisation of the option ROM.
[Figure: example memory map of a split PXE option ROM]
Rather than setting all the XROMBARs in one go such that it includes the pre-initialised length, the BIOS probably loads and initialises one at a time. It can't overwrite / remove the option ROM, because routines in the option ROM separate from the IPL code may still be called via the BCV to interact with the hardware. For instance, after the video BIOS initialisation code has run, the PCI configuration space is scanned and the first XROMBAR that returns a length is set to the end of the video BIOS. Reads are then directed to DMI and writes directed to memory. It then shadows it to memory, redirects reads to RAM, and performs a far call to the initialisation entry. The IPL code then shrinks itself by removing the initialisation code. The BIOS checks the new size of the option ROM, looks for PnP expansion headers, and registers BEVs/BCVs. It then redirects reads back to DMI and loads the next XROMBAR. The BIOS builds the BCV table an entry at a time and then executes the BCVs in order of BCV priority.
An option ROM could move the BEV/BCV to a PMM allocation in extended memory, leaving a jump instruction at the BEV/BCV offset but that would break relative addressing of jumps in the BCV/BEV into functions in the rest of the option ROM. It could therefore relocate its whole self to a PMM allocation and reduce the size to just the headers, but clearly this isn't the case with most option ROMs. A BEV does relocate the UNDI driver though.
1: It is not possible to have a ROM that big copied into the option ROM space. The init size field is 1 byte and is interpreted in 512-byte increments, so at most 255 * 512 bytes = 127.5 KB.
2: Too bad, some of them won't be initialized.
3: There are PAMs (Programmable Attribute Map registers) in the northbridge (see the Intel chipset datasheet). These registers can write-protect specific ranges in the option ROM space.
4: The limit applies to VGA too. It just has to start at C0000h, while some NIC can start at... pfft, D0000h as well.
Thank you Pyjong.
You are welcome Pyjong.

What is the exact procedure to perform external authentication?

I am trying to perform external authentication on a smart card. I got the 8-byte challenge from the card, and now I need to generate the cryptogram over those 8 bytes.
But I don't know how to perform that cryptogram operation (the smart card toolkit turns the 8 bytes into 72 bytes).
The following commands are generated by the toolkit:
00 A4 04 00 0C A0 00 00 02 43 00 13 00 00 00 01 04
00 22 41 A4 06 83 01 01 95 01 80
command: 80 84 00 00 08 Response: (8 bytes challenge)
command: 80 82 00 00 48 (72 bytes data)
Can anybody say what steps to follow to turn the 8-byte challenge into the 72 bytes of data?
Conversion is not exactly the right term. You need to apply the cryptographic algorithm with the correct key to the received challenge. I assume that an External Authenticate command is performed, but the strange data field length allows no assumption about the algorithm used. Possibly an external challenge is also provided in the command and session keys are established. Since the assumed Get Challenge command and the External Authenticate command have a class byte indicating a proprietary command, ISO 7816-4 won't help here and you need to refer to the card specification. To get hold of the key you will probably have to sign a non-disclosure agreement with the card issuer.
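For the APDU plumbing only (not the cryptogram itself), a minimal sketch with pyscard might look like the following; build_cryptogram() is deliberately a placeholder since, as noted above, the algorithm and key come from the card specification:

```python
from smartcard.System import readers

def build_cryptogram(challenge: bytes) -> bytes:
    # Placeholder: the card-specific computation (e.g. a MAC or signature over
    # the challenge with the issuer-provided key) that yields the 72 bytes.
    raise NotImplementedError("defined by the card specification")

conn = readers()[0].createConnection()
conn.connect()

select = [0x00, 0xA4, 0x04, 0x00, 0x0C,
          0xA0, 0x00, 0x00, 0x02, 0x43, 0x00, 0x13, 0x00, 0x00, 0x00, 0x01, 0x04]
mse    = [0x00, 0x22, 0x41, 0xA4, 0x06, 0x83, 0x01, 0x01, 0x95, 0x01, 0x80]
for apdu in (select, mse):
    conn.transmit(apdu)

challenge, sw1, sw2 = conn.transmit([0x80, 0x84, 0x00, 0x00, 0x08])  # GET CHALLENGE
cryptogram = build_cryptogram(bytes(challenge))                       # must be 72 bytes
ext_auth = [0x80, 0x82, 0x00, 0x00, 0x48] + list(cryptogram)          # EXTERNAL AUTHENTICATE
data, sw1, sw2 = conn.transmit(ext_auth)
print(f"SW: {sw1:02X}{sw2:02X}")
```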