Looking for a CIL code list: what do 16, 17, 18, 19, and 02 do?

I was wondering if there is a CIL code list. I think 16 = false and 17 = true, but I'm not 100% sure; I also think -1 = 0. If anyone has a website to help with this, that would be great.
16 = false
17 = true ?

You can find a complete listing of all opcodes in Partition III of ECMA-335. As for the specific instructions you listed, it depends on whether those numbers are decimal or hexadecimal. Read as decimal, they are:
02 (0x02) = ldarg.0
16 (0x10) = starg.s
17 (0x11) = ldloc.s
18 (0x12) = ldloca.s
19 (0x13) = stloc.s
Read as hexadecimal (which would match your false/true guess):
0x16 = ldc.i4.0 // could be a 0, false, or '\0'; the exact type depends on how it's used.
0x17 = ldc.i4.1 // could be a 1, true, or '\u0001'; again, the exact type depends on how it's used.
0x18 = ldc.i4.2
0x19 = ldc.i4.3
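A quick way to sanity-check values like these is a lookup table keyed by the byte value; here is a minimal Python sketch (opcode names as in the ECMA-335 table) showing how the same digits name different opcodes depending on the base they are read in:
# Same digits, different opcodes depending on base
OPCODES = {
    0x02: "ldarg.0",
    0x10: "starg.s", 0x11: "ldloc.s", 0x12: "ldloca.s", 0x13: "stloc.s",
    0x16: "ldc.i4.0", 0x17: "ldc.i4.1", 0x18: "ldc.i4.2", 0x19: "ldc.i4.3",
}
for text in ["02", "16", "17", "18", "19"]:
    print(f"{text}: decimal -> {OPCODES.get(int(text, 10), '?')}, "
          f"hex -> {OPCODES.get(int(text, 16), '?')}")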

Why doesn't readelf report the right sizes?

I have a Linux executable file (which is an ELF file) called a.out.
I use
readelf -hl a.out
and get this output:
ELF Header:
Magic: 7f 45 4c 46 02 01 01 03 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - GNU
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices X86-64
Version: 0x1
Entry point address: 0x4008e0
Start of program headers: 64 (bytes into file)
Start of section headers: 910680 (bytes into file)
Flags: 0x0
Size of this header: 64 (bytes)
Size of program headers: 56 (bytes)
Number of program headers: 6
Size of section headers: 64 (bytes)
Number of section headers: 33
Section header string table index: 30
Program Headers:
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
LOAD 0x0000000000000000 0x0000000000400000 0x0000000000400000
0x00000000000c9be6 0x00000000000c9be6 R E 200000
LOAD 0x00000000000c9eb8 0x00000000006c9eb8 0x00000000006c9eb8
0x0000000000001c98 0x0000000000003550 RW 200000
NOTE 0x0000000000000190 0x0000000000400190 0x0000000000400190
0x0000000000000044 0x0000000000000044 R 4
TLS 0x00000000000c9eb8 0x00000000006c9eb8 0x00000000006c9eb8
0x0000000000000020 0x0000000000000050 R 8
GNU_STACK 0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000000000000000 0x0000000000000000 RW 10
GNU_RELRO 0x00000000000c9eb8 0x00000000006c9eb8 0x00000000006c9eb8
0x0000000000000148 0x0000000000000148 R 1
Section to Segment mapping:
Segment Sections...
00 .note.ABI-tag .note.gnu.build-id .rela.plt .init .plt .text __libc_freeres_fn __libc_thread_freeres_fn .fini .rodata __libc_subfreeres __libc_atexit .stapsdt.base __libc_thread_subfreeres .eh_frame .gcc_except_table
01 .tdata .init_array .fini_array .jcr .data.rel.ro .got .got.plt .data .bss __libc_freeres_ptrs
02 .note.ABI-tag .note.gnu.build-id
03 .tdata .tbss
04
05 .tdata .init_array .fini_array .jcr .data.rel.ro .got
According to the above output, this ELF file should have size:
ELF header: 64
Section headers: 64 * 33 = 2112
Program headers: 56 * 6 = 336
Segments: 0xc9be6 + 0x1c98 + 0x44 + 0x20 + 0x148 = 834090
Total: 64 + 2112 + 336 + 834090 = 836602
However, /bin/ls reports a file size of 912792.
Where are the missing 912792 - 836602 = 76190 bytes? Which parts did I forget to count?
UPDATE
According to Jonathon Reinhart's comment, I recounted the size using section sizes instead of segment sizes:
echo "print 0x020 + 0x024 + 0x0f0 + 0x01a + 0x0a0 + 0x09eda4 + 0x02529 + 0x0de + 0x09 + 0x01d320 + 0x050 + 0x08 + 0x01 + 0x08 + 0x0b04c + 0x0b2 + 0x020 + 0x010 + 0x010 + 0x08 + 0x0e4 + 0x010 + 0x068 + 0x01ad0 + 0x023 + 0x0f18 + 0x0169 + 0x0b100 + 0x0685a" | /usr/bin/python
The section size values above are taken from the output of readelf -S a.out; sections of type NOBITS (which occupy no space in the file) were not counted.
The command prints 909459, which is bigger than the segment total of 834090, but the overall total still does not equal the file size:
64 + 2112 + 336 + 909459 = 911971 != 912792
I am still missing 912792 - 911971 = 821 bytes.
UPDATE 2 [solved]
Following the comments, I tested the offsets. The result shows that padding between the parts of the file does exist, which accounts for the remaining bytes.
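For anyone who wants to cross-check this on their own binaries: since the section header table normally sits at the very end of the file, e_shoff + e_shnum * e_shentsize should equal the file size. A minimal Python sketch (ELF64 header field offsets per the spec; the filename a.out is assumed):
import struct, os

path = "a.out"  # assumed filename from the question

with open(path, "rb") as f:
    header = f.read(64)  # the ELF64 header is 64 bytes

# e_shoff (u64) at offset 0x28, e_shentsize (u16) at 0x3a, e_shnum (u16) at 0x3c
e_shoff, = struct.unpack_from("<Q", header, 0x28)
e_shentsize, e_shnum = struct.unpack_from("<HH", header, 0x3a)

print(e_shoff + e_shnum * e_shentsize, os.path.getsize(path))
For the numbers above this gives 910680 + 33 * 64 = 912792, exactly the size /bin/ls reports.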

Accessing USB device data based only on the HID Report descriptor

I have a Digital Sound Level Meter (sonometer), a GM1356 with USB. There is software to handle it on Windows, but I don't have the CD and it's not available on the internet. What I want to do is read its current noise-level data on Linux.
I have already found a library that lets me do this in a language I know (Ruby, libusb). Next, I installed Wireshark to check what the device sends to the PC. It doesn't send much; the most interesting packet I found is the DESCRIPTOR HID Report. I wonder what steps I should take next to read the data I'm interested in. How can I determine what requests I should send to get it?
HID Report
Global item (Usage)
Header
.... ..10 = bSize: 2 bytes (2)
.... 01.. = bType: Global (1)
0000 .... = bTag: Usage (0x0)
Usage page: [Vendor-defined] (0xffa0)
Local item (Usage)
Header
.... ..01 = bSize: 1 byte (1)
.... 10.. = bType: Local (2)
0000 .... = bTag: Usage (0x0)
Usage: [Vendor-defined] (0xffa00001)
Main item (Collection)
Header
.... ..01 = bSize: 1 byte (1)
.... 00.. = bType: Main (0)
1010 .... = bTag: Collection (0xa)
Collection type: Application (0x01)
Local item (Usage)
Header
.... ..01 = bSize: 1 byte (1)
.... 10.. = bType: Local (2)
0000 .... = bTag: Usage (0x0)
Usage: [Vendor-defined] (0xffa00002)
Main item (Collection)
Header
.... ..01 = bSize: 1 byte (1)
.... 00.. = bType: Main (0)
1010 .... = bTag: Collection (0xa)
Collection type: Physical (0x00)
Global item (Usage)
Header
.... ..10 = bSize: 2 bytes (2)
.... 01.. = bType: Global (1)
0000 .... = bTag: Usage (0x0)
Usage page: [Vendor-defined] (0xffa1)
Local item (Usage)
Header
.... ..01 = bSize: 1 byte (1)
.... 10.. = bType: Local (2)
0000 .... = bTag: Usage (0x0)
Usage: [Vendor-defined] (0xffa10003)
Local item (Usage)
Header
.... ..01 = bSize: 1 byte (1)
.... 10.. = bType: Local (2)
0000 .... = bTag: Usage (0x0)
Usage: [Vendor-defined] (0xffa10004)
Global item (Logical minimum)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0001 .... = bTag: Logical minimum (0x1)
Logical minimum: 128
Global item (Logical maximum)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0010 .... = bTag: Logical maximum (0x2)
Logical maximum: 127
Global item (Physical minimum)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0011 .... = bTag: Physical minimum (0x3)
Physical minimum: 0
Global item (Physical maximum)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0100 .... = bTag: Physical maximum (0x4)
Physical maximum: 255
Global item (Report size)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0111 .... = bTag: Report size (0x7)
Report size: 8
Global item (Report count)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
1001 .... = bTag: Report count (0x9)
Report count: 8
Main item (Input)
Header
.... ..01 = bSize: 1 byte (1)
.... 00.. = bType: Main (0)
1000 .... = bTag: Input (0x8)
.... .... 0 = Data/constant: Data
.... ...1 . = Data type: Variable
.... ..0. . = Coordinates: Absolute
.... .0.. . = Min/max wraparound: No Wrap
.... 0... . = Physical relationship to data: Linear
...0 .... . = Preferred state: Preferred State
..0. .... . = Has null position: No Null position
.0.. .... . = [Reserved]: False
0... .... . = Bits or bytes: Buffered bytes (default, no second byte present)
Local item (Usage)
Header
.... ..01 = bSize: 1 byte (1)
.... 10.. = bType: Local (2)
0000 .... = bTag: Usage (0x0)
Usage: [Vendor-defined] (0xffa10005)
Local item (Usage)
Header
.... ..01 = bSize: 1 byte (1)
.... 10.. = bType: Local (2)
0000 .... = bTag: Usage (0x0)
Usage: [Vendor-defined] (0xffa10006)
Global item (Logical minimum)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0001 .... = bTag: Logical minimum (0x1)
Logical minimum: 128
Global item (Logical maximum)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0010 .... = bTag: Logical maximum (0x2)
Logical maximum: 127
Global item (Physical minimum)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0011 .... = bTag: Physical minimum (0x3)
Physical minimum: 0
Global item (Physical maximum)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0100 .... = bTag: Physical maximum (0x4)
Physical maximum: 255
Global item (Report size)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
0111 .... = bTag: Report size (0x7)
Report size: 8
Global item (Report count)
Header
.... ..01 = bSize: 1 byte (1)
.... 01.. = bType: Global (1)
1001 .... = bTag: Report count (0x9)
Report count: 8
Main item (Output)
Header
.... ..01 = bSize: 1 byte (1)
.... 00.. = bType: Main (0)
1001 .... = bTag: Output (0x9)
.... .... 0 = Data/constant: Data
.... ...1 . = Data type: Variable
.... ..0. . = Coordinates: Absolute
.... .0.. . = Min/max wraparound: No Wrap
.... 0... . = Physical relationship to data: Linear
...0 .... . = Preferred state: Preferred State
..0. .... . = Has null position: No Null position
.0.. .... . = (Non)-volatile: Non Volatile
0... .... . = Bits or bytes: Buffered bytes (default, no second byte present)
Main item (End collection)
Header
.... ..00 = bSize: 0 bytes (0)
.... 00.. = bType: Main (0)
1100 .... = bTag: End collection (0xc)
Main item (End collection)
Header
.... ..00 = bSize: 0 bytes (0)
.... 00.. = bType: Main (0)
1100 .... = bTag: End collection (0xc)
When you decode the HID descriptor it will show the packet formats. Unfortunately, in this case the usage pages are vendor-defined, so it is not possible to say exactly how each usage is to be interpreted.
I decoded it using hidrdd (disclaimer: I wrote it, but it is free and open source, so I have no conflict of interest) as:
//--------------------------------------------------------------------------------
// Decoded Application Collection
//--------------------------------------------------------------------------------
/*
06 A0FF (GLOBAL) USAGE_PAGE 0xFFA0 Vendor-defined
09 01 (LOCAL) USAGE 0xFFA00001 <-- Warning: Undocumented usage (document it by inserting 0001 into file FFA0.conf)
A1 01 (MAIN) COLLECTION 0x01 Application (Usage=0xFFA00001: Page=Vendor-defined, Usage=, Type=) <-- Error: COLLECTION must be preceded by a known USAGE
09 02 (LOCAL) USAGE 0xFFA00002 <-- Warning: Undocumented usage (document it by inserting 0002 into file FFA0.conf)
A1 00 (MAIN) COLLECTION 0x00 Physical (Usage=0xFFA00002: Page=Vendor-defined, Usage=, Type=) <-- Error: COLLECTION must be preceded by a known USAGE
06 A1FF (GLOBAL) USAGE_PAGE 0xFFA1 Vendor-defined
09 03 (LOCAL) USAGE 0xFFA10003 <-- Warning: Undocumented usage (document it by inserting 0003 into file FFA1.conf)
09 04 (LOCAL) USAGE 0xFFA10004 <-- Warning: Undocumented usage (document it by inserting 0004 into file FFA1.conf)
15 80 (GLOBAL) LOGICAL_MINIMUM 0x80 (-128)
25 7F (GLOBAL) LOGICAL_MAXIMUM 0x7F (127)
35 00 (GLOBAL) PHYSICAL_MINIMUM 0x00 (0) <-- Info: Consider replacing 35 00 with 34
45 FF (GLOBAL) PHYSICAL_MAXIMUM 0xFF (-1)
75 08 (GLOBAL) REPORT_SIZE 0x08 (8) Number of bits per field
95 08 (GLOBAL) REPORT_COUNT 0x08 (8) Number of fields
81 02 (MAIN) INPUT 0x00000002 (8 fields x 8 bits) 0=Data 1=Variable 0=Absolute 0=NoWrap 0=Linear 0=PrefState 0=NoNull 0=NonVolatile 0=Bitmap <-- Error: PHYSICAL_MAXIMUM (-1) is less than PHYSICAL_MINIMUM (0)
09 05 (LOCAL) USAGE 0xFFA10005 <-- Warning: Undocumented usage (document it by inserting 0005 into file FFA1.conf)
09 06 (LOCAL) USAGE 0xFFA10006 <-- Warning: Undocumented usage (document it by inserting 0006 into file FFA1.conf)
15 80 (GLOBAL) LOGICAL_MINIMUM 0x80 (-128) <-- Redundant: LOGICAL_MINIMUM is already -128
25 7F (GLOBAL) LOGICAL_MAXIMUM 0x7F (127) <-- Redundant: LOGICAL_MAXIMUM is already 127
35 00 (GLOBAL) PHYSICAL_MINIMUM 0x00 (0) <-- Redundant: PHYSICAL_MINIMUM is already 0 <-- Info: Consider replacing 35 00 with 34
45 FF (GLOBAL) PHYSICAL_MAXIMUM 0xFF (-1) <-- Redundant: PHYSICAL_MAXIMUM is already -1
75 08 (GLOBAL) REPORT_SIZE 0x08 (8) Number of bits per field <-- Redundant: REPORT_SIZE is already 8
95 08 (GLOBAL) REPORT_COUNT 0x08 (8) Number of fields <-- Redundant: REPORT_COUNT is already 8
91 02 (MAIN) OUTPUT 0x00000002 (8 fields x 8 bits) 0=Data 1=Variable 0=Absolute 0=NoWrap 0=Linear 0=PrefState 0=NoNull 0=NonVolatile 0=Bitmap <-- Error: PHYSICAL_MAXIMUM (-1) is less than PHYSICAL_MINIMUM (0)
C0 (MAIN) END_COLLECTION Physical <-- Warning: Physical units are still in effect PHYSICAL(MIN=0,MAX=-1) UNIT(0x,EXP=0)
C0 (MAIN) END_COLLECTION Application <-- Warning: Physical units are still in effect PHYSICAL(MIN=0,MAX=-1) UNIT(0x,EXP=0)
*/
//--------------------------------------------------------------------------------
// Vendor-defined inputReport (Device --> Host)
//--------------------------------------------------------------------------------
typedef struct
{
// No REPORT ID byte
// Collection: CA: CP:
int8_t VEN_0003; // Usage 0xFFA10003: , Value = -128 to 127, Physical = (Value + 128) x -1 / 255
int8_t VEN_0004[7]; // Usage 0xFFA10004: , Value = -128 to 127, Physical = (Value + 128) x -1 / 255
} inputReport_t;
//--------------------------------------------------------------------------------
// Vendor-defined outputReport (Device <-- Host)
//--------------------------------------------------------------------------------
typedef struct
{
// No REPORT ID byte
// Collection: CA: CP:
int8_t VEN_0005; // Usage 0xFFA10005: , Value = -128 to 127, Physical = (Value + 128) x -1 / 255
int8_t VEN_0006[7]; // Usage 0xFFA10006: , Value = -128 to 127, Physical = (Value + 128) x -1 / 255
} outputReport_t;
As you can see, the above HID descriptor has some issues (for example, the physical maximum 45 FF is -1, but I think they meant 255, which should be encoded as 46 FF 00), but the problem remains that it tells you nothing about the meaning of the usages. By the way, even Wireshark has not reported the logical minimum correctly: 15 80 is -128, not 128.
All we can tell from it is that the reports are 8 bytes long and that the first byte seems to be some kind of id (well, its usage is different from that of the remaining 7 bytes).
Only the vendor's driver knows how to interpret the reports, but with a sufficient number of Wireshark packet captures obtained under controlled conditions you may be able to reverse engineer a workable interpretation.
Sorry, but that's the best I can do with this.
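If you want to poke at the device from a script while reverse engineering, a rough sketch using the hidapi Python binding (pip install hidapi) might look like the one below. Everything device-specific here is an assumption: take the real vendor/product IDs from lsusb, and replay request bytes captured in Wireshark.
import hid

VENDOR_ID, PRODUCT_ID = 0x0000, 0x0000  # placeholders: take the real IDs from lsusb

dev = hid.device()
dev.open(VENDOR_ID, PRODUCT_ID)

# The descriptor says both reports are 8 bytes with no report ID, so hidapi
# expects a leading 0x00 "no report ID" byte on writes. Which command bytes
# trigger a measurement is vendor-defined; replay one from your captures.
request = [0x00] + [0x00] * 8  # placeholder command bytes
dev.write(request)

reply = dev.read(8, timeout_ms=1000)
print(reply)  # first byte looks like an id/status; the other 7 are payload
dev.close()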
I bought a decibel meter too, which happens to be compatible with your model. I am currently trying to port this code to a bash script: https://github.com/dobra-noc/gm1356. It works fine for me with my device (which, by the way, isn't even a GM1356), and I'm guessing it will work for you too.

How to convert two bytes to a floating-point number

I have some legacy files that need to be mined for data. The files were created by Lotus 1-2-3 Release 4 for DOS. I'm trying to read the files faster by parsing the bytes rather than using Lotus to open them.
Dim fileBytes() As Byte = My.Computer.FileSystem.ReadAllBytes(fiPath)
'I loop through all the data getting first/second bytes for each value
Do ...
    Dim FirstByte As Int16 = Convert.ToInt16(fileBytes(Index))
    Dim SecondByte As Int16 = Convert.ToInt16(fileBytes(Index + 1))
Loop ...
I can get integer values like this:
Dim value As Int16 = BitConverter.ToInt16(fileBytes, Index + 8) / 2
But floating-point numbers are more complicated. Only the smaller numbers are stored in two bytes; larger values take 10 bytes, but that's another question. Here we only have the smaller, two-byte values. Below are some sample values. I entered the byte values into Excel and used =DEC2BIN() to convert them to binary, adding zeros on the left as needed to get 8 bits.
First Second
Byte Byte Value First Byte 2nd Byte
7 241 = -1.2 0000 0111 1111 0001
254 255 = -1 1111 1110 1111 1111
9 156 = -0.8 0000 1001 1001 1100
9 181 = -0.6 0000 1001 1011 0101
9 206 = -0.4 0000 1001 1100 1110
9 231 = -0.2 0000 1001 1110 0111
13 0 = 0 0000 1101 0000 0000
137 12 = 0.1 1000 1001 0000 1100
9 25 = 0.2 0000 1001 0001 1001
137 37 = 0.3 1000 1001 0010 0101
9 50 = 0.4 0000 1001 0011 0010
15 2 = 0.5 0000 1111 0000 0010
9 75 = 0.6 0000 1001 0100 1011
137 87 = 0.7 1000 1001 0101 0111
9 100 = 0.8 0000 1001 0110 0100
137 112 = 0.9 1000 1001 0111 0000
2 0 = 1 0000 0010 0000 0000
199 13 = 1.1 1100 0111 0000 1101
7 15 = 1.2 0000 0111 0000 1111
71 16 = 1.3 0100 0111 0001 0000
135 17 = 1.4 1000 0111 0001 0001
15 6 = 1.5 0000 1111 0000 0110
7 20 = 1.6 0000 0111 0001 0100
71 21 = 1.7 0100 0111 0001 0101
135 22 = 1.8 1000 0111 0001 0110
199 23 = 1.9 1100 0111 0001 0111
4 0 = 2 0000 0100 0000 0000
I'm hoping for a simple conversion method, but maybe it'll be more complicated.
I looked at BCD: "BCD was used in many early decimal computers, and is implemented in the instruction set of machines such as the IBM System/360 series", and at the Intel BCD opcodes.
I do not know if this is BCD or something else. How do I convert the two bytes into a floating-point number?
I used the information from the website pointed out by Andrew Morton in the comments. Basically, the stored 16-bit quantity is either a 15-bit two's complement integer (when the lsb is 0) or a 12-bit two's complement integer plus a code indicating a scale factor to be applied to that integer (when the lsb is 1). I am not familiar with vb.net, so I am providing ISO-C code here. The program below successfully decodes all the data provided in the question.
Note: I am converting to an 8-byte double in the code below, while the question suggests that the original conversion may have been to a 10-byte long double format (the 80-bit extended-precision format of the 8087 math coprocessor). It would seem like a good idea to try more test data to achieve full coverage of the eight scaling codes: large integers like 1,000,000 and 1,000,000,000; decimal fractions like 0.0003, 0.000005, and 0.00000007; and binary fractions like 0.125 (1/8) and 0.046875 (3/64).
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef struct {
    uint8_t byte1;
    uint8_t byte2;
} num;

num data[] =
{
    {  7, 241}, {254, 255}, {  9, 156}, {  9, 181}, {  9, 206}, {  9, 231},
    { 13,   0}, {137,  12}, {  9,  25}, {137,  37}, {  9,  50}, { 15,   2},
    {  9,  75}, {137,  87}, {  9, 100}, {137, 112}, {  2,   0}, {199,  13},
    {  7,  15}, { 71,  16}, {135,  17}, { 15,   6}, {  7,  20}, { 71,  21},
    {135,  22}, {199,  23}, {  4,   0}
};
int data_count = sizeof (data) / sizeof (data[0]);

/* define operators that may look more familiar to vb.net programmers */
#define XOR ^
#define MOD %

int main (void)
{
    int i;
    uint8_t b1, b2;
    uint16_t h, code;
    int32_t n;
    double r;

    for (i = 0; i < data_count; i++) {
        b1 = data[i].byte1;
        b2 = data[i].byte2;
        /* data word */
        h = ((uint16_t)b2 * 256) + b1;
        /* h<0>=1 indicates stored integer needs to be scaled */
        if ((h MOD 2) == 1) {
            /* extract scaling code in h<3:1> */
            code = (h / 2) MOD 8;
            /* scaled 12-bit integer in h<15:4>. Extract, sign-extend to 32 bits */
            n = (int32_t)((((uint32_t)h / 16) XOR 2048) - 2048);
            /* convert integer to floating-point */
            r = (double)n;
            /* scale based on scaling code */
            switch (code) {
            case 0x0: r = r * 5000;  break;
            case 0x1: r = r * 500;   break;
            case 0x2: r = r / 20;    break;
            case 0x3: r = r / 200;   break;
            case 0x4: r = r / 2000;  break;
            case 0x5: r = r / 20000; break;
            case 0x6: r = r / 16;    break;
            case 0x7: r = r / 64;    break;
            };
        } else {
            /* unscaled 15-bit integer in h<15:1>. Extract, sign extend to 32 bits */
            n = (int32_t)((((uint32_t)h / 2) XOR 16384) - 16384);
            /* convert integer to floating-point */
            r = (double)n;
        }
        printf ("[%3d,%3d] n=%08x r=% 12.8f\n", b1, b2, n, r);
    }
    return EXIT_SUCCESS;
}
The output of this program is as follows:
[ 7,241] n=ffffff10 r= -1.20000000
[254,255] n=ffffffff r= -1.00000000
[ 9,156] n=fffff9c0 r= -0.80000000
[ 9,181] n=fffffb50 r= -0.60000000
[ 9,206] n=fffffce0 r= -0.40000000
[ 9,231] n=fffffe70 r= -0.20000000
[ 13, 0] n=00000000 r= 0.00000000
[137, 12] n=000000c8 r= 0.10000000
[ 9, 25] n=00000190 r= 0.20000000
[137, 37] n=00000258 r= 0.30000000
[ 9, 50] n=00000320 r= 0.40000000
[ 15, 2] n=00000020 r= 0.50000000
[ 9, 75] n=000004b0 r= 0.60000000
[137, 87] n=00000578 r= 0.70000000
[ 9,100] n=00000640 r= 0.80000000
[137,112] n=00000708 r= 0.90000000
[ 2, 0] n=00000001 r= 1.00000000
[199, 13] n=000000dc r= 1.10000000
[ 7, 15] n=000000f0 r= 1.20000000
[ 71, 16] n=00000104 r= 1.30000000
[135, 17] n=00000118 r= 1.40000000
[ 15, 6] n=00000060 r= 1.50000000
[ 7, 20] n=00000140 r= 1.60000000
[ 71, 21] n=00000154 r= 1.70000000
[135, 22] n=00000168 r= 1.80000000
[199, 23] n=0000017c r= 1.90000000
[ 4, 0] n=00000002 r= 2.00000000
Just a VB.Net translation of the C code posted by njuffa.
The original structure has been replaced with a Byte array and the numeric data types adapted to .Net types. That's all.
Dim data As Byte(,) = New Byte(,) {
    {7, 241}, {254, 255}, {9, 156}, {9, 181}, {9, 206}, {9, 231}, {13, 0}, {137, 12}, {9, 25},
    {137, 37}, {9, 50}, {15, 2}, {9, 75}, {137, 87}, {9, 100}, {137, 112}, {2, 0}, {199, 13},
    {7, 15}, {71, 16}, {135, 17}, {15, 6}, {7, 20}, {71, 21}, {135, 22}, {199, 23}, {4, 0}
}

Dim byte1, byte2 As Byte
Dim word, code As UShort
Dim nValue As Integer
Dim result As Double

For i As Integer = 0 To (data.Length \ 2 - 1)
    byte1 = data(i, 0)
    byte2 = data(i, 1)
    word = (byte2 * 256US) + byte1
    If (word Mod 2) = 1 Then
        code = (word \ 2US) Mod 8US
        nValue = ((word \ 16) Xor 2048) - 2048
        Select Case code
            Case 0 : result = nValue * 5000
            Case 1 : result = nValue * 500
            Case 2 : result = nValue / 20
            Case 3 : result = nValue / 200
            Case 4 : result = nValue / 2000
            Case 5 : result = nValue / 20000
            Case 6 : result = nValue / 16
            Case 7 : result = nValue / 64
        End Select
    Else
        'unscaled 15-bit integer in h<15:1>. Extract, sign extend to 32 bits
        nValue = ((word \ 2) Xor 16384) - 16384
        result = nValue
    End If
    Console.WriteLine($"[{byte1,3:D}, {byte2,3:D}] number = {nValue:X8} result ={result,12:F8}")
Next

H264 encoding and decoding using VideoToolbox

I was testing encoding and decoding with VideoToolbox, converting captured frames to H264 and using that data to display them in an AVSampleBufferDisplayLayer.
I get an error while decompressing: CMVideoFormatDescriptionCreateFromH264ParameterSets fails with error code -12712.
I followed this code from mobisoftinfotech.com:
status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
    kCFAllocatorDefault, 2,
    (const uint8_t *const *)parameterSetPointers,
    parameterSetSizes, 4, &_formatDesc);
videoCompressionTest: can anyone figure out the problem?
I am not sure if you have figured out the problem yet. However, I found two places in your code that lead to the error. After fixing them and running your test app locally, it seems to work fine. (Tested with Xcode 9.4.1, macOS 10.13)
The first one is in the -(void)CompressAndConvertToData:(CMSampleBufferRef)sampleBuffer method, where the while loop should look like this:
while (bufferOffset < blockBufferLength - AVCCHeaderLength) {
    // Read the NAL unit length
    uint32_t NALUnitLength = 0;
    memcpy(&NALUnitLength, bufferDataPointer + bufferOffset, AVCCHeaderLength);
    // Convert the length value from big-endian to host byte order
    NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
    // Write the start code to the elementary stream
    [elementaryStream appendBytes:startCode length:startCodeLength];
    // Write the NAL unit without the AVCC length header to the elementary stream
    [elementaryStream appendBytes:bufferDataPointer + bufferOffset + AVCCHeaderLength
                           length:NALUnitLength];
    // Move to the next NAL unit in the block buffer
    bufferOffset += AVCCHeaderLength + NALUnitLength;
}
uint8_t *bytes = (uint8_t *)[elementaryStream bytes];
int size = (int)[elementaryStream length];
[self receivedRawVideoFrame:bytes withSize:size];
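For reference, the AVCC-to-Annex-B conversion this loop performs is easy to prototype outside the app; here is a minimal Python sketch of the same logic (4-byte big-endian length headers assumed, as in the code above):
import struct

def avcc_to_annexb(data: bytes, header_len: int = 4) -> bytes:
    """Replace each big-endian NAL length prefix with a 00 00 00 01 start code."""
    out, offset = bytearray(), 0
    while offset + header_len <= len(data):
        (nal_len,) = struct.unpack_from(">I", data, offset)  # big-endian length
        out += b"\x00\x00\x00\x01"  # Annex B start code
        out += data[offset + header_len : offset + header_len + nal_len]
        offset += header_len + nal_len
    return bytes(out)

# One 3-byte NAL unit behind a 4-byte AVCC length header:
print(avcc_to_annexb(b"\x00\x00\x00\x03\x65\x88\x80").hex())  # 00000001658880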
The second place is in the decompression code where you process NALU type 8, i.e. the block of code in the if (nalu_type == 8) statement. This is a tricky one.
To fix it, update
for (int i = _spsSize + 12; i < _spsSize + 50; i++)
to
for (int i = _spsSize + 12; i < _spsSize + 12 + 50; i++)
And then you are free to remove this hack:
//was crashing here
if(_ppsSize == 0)
_ppsSize = 4;
Why? Let's print out the frame packet format:
po frame
▿ 4282 elements
- 0 : 0
- 1 : 0
- 2 : 0
- 3 : 1
- 4 : 39
- 5 : 100
- 6 : 0
- 7 : 30
- 8 : 172
- 9 : 86
- 10 : 193
- 11 : 112
- 12 : 247
- 13 : 151
- 14 : 64
- 15 : 0
- 16 : 0
- 17 : 0
- 18 : 1
- 19 : 40
- 20 : 238
- 21 : 60
- 22 : 176
- 23 : 0
- 24 : 0
- 25 : 0
- 26 : 1
- 27 : 6
- 28 : 5
- 29 : 35
- 30 : 71
- 31 : 86
- 32 : 74
- 33 : 220
- 34 : 92
- 35 : 76
- 36 : 67
- 37 : 63
- 38 : 148
- 39 : 239
- 40 : 197
- 41 : 17
- 42 : 60
- 43 : 209
- 44 : 67
- 45 : 168
- 46 : 0
- 47 : 0
- 48 : 3
- 49 : 0
- 50 : 0
- 51 : 3
- 52 : 0
- 53 : 2
- 54 : 143
- 55 : 92
- 56 : 40
- 57 : 1
- 58 : 221
- 59 : 204
- 60 : 204
- 61 : 221
- 62 : 2
- 63 : 0
- 64 : 76
- 65 : 75
- 66 : 64
- 67 : 128
- 68 : 0
- 69 : 0
- 70 : 0
- 71 : 1
- 72 : 37
- 73 : 184
- 74 : 32
- 75 : 1
- 76 : 223
- 77 : 205
- 78 : 248
- 79 : 30
- 80 : 231
… more
Looking at the dump: the type 7 (SPS) NALU follows the 0, 0, 0, 1 start code at indices 0 to 3, the type 8 (PPS) NALU follows the start code at indices 15 to 18, and a type 6 (SEI) NALU follows the one at indices 23 to 26; the frame data itself only begins after the start code at indices 68 to 71. That is why I modified the for loop a bit, to scan from the start index (_spsSize + 12) over a range of 50 bytes.
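When reading such dumps, a small script that locates every start code and prints the NAL type (the low 5 bits of the byte that follows) saves a lot of squinting; a Python sketch over the first bytes of the dump above:
# First 30 bytes of the dumped frame (from the po output)
frame = bytes([0, 0, 0, 1, 39, 100, 0, 30, 172, 86, 193, 112, 247, 151, 64,
               0, 0, 0, 1, 40, 238, 60, 176, 0, 0, 0, 1, 6, 5, 35])

for i in range(len(frame) - 4):
    if frame[i:i + 4] == b"\x00\x00\x00\x01":
        print(f"start code at {i}-{i + 3}, nal_type = {frame[i + 4] & 0x1F}")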
I haven't fully tested your code to make sure encoding and decoding work properly as expected, but I hope these findings help you.
By the way, if there is any misunderstanding, I would love to learn from your comments.

What's the difference between C# SHA256Managed and cryptopp::SHA256

I'm trying to replace the .NET SHA256Managed class with cryptopp::SHA256.
Here's the C# code:
internal byte[] GenerateKey(byte[] keySeed, Guid keyId)
{
byte[] truncatedKeySeed = new byte[30];
Array.Copy(keySeed, truncatedKeySeed, truncatedKeySeed.Length);
Console.WriteLine("Key Seed");
foreach (byte b in truncatedKeySeed)
{
Console.Write("0x" + Convert.ToString(b, 16) + ",");
}
Console.WriteLine();
//
// Get the keyId as a byte array
//
byte[] keyIdAsBytes = keyId.ToByteArray();
SHA256Managed sha_A = new SHA256Managed();
sha_A.TransformBlock(truncatedKeySeed, 0, truncatedKeySeed.Length, truncatedKeySeed, 0);
sha_A.TransformFinalBlock(keyIdAsBytes, 0, keyIdAsBytes.Length);
byte[] sha_A_Output = sha_A.Hash;
Console.WriteLine("sha_a:" + sha_A_Output.Length);
foreach (byte b in sha_A_Output)
{
Console.Write("0x" + Convert.ToString(b, 16) + ",");
}
Console.WriteLine();
.....
}
The output:
Key Seed
0x5d,0x50,0x68,0xbe,0xc9,0xb3,0x84,0xff,0x60,0x44,0x86,0x71,0x59,0xf1,0x6d,0x6b,0x75,0x55,0x44,0xfc,0xd5,0x11,0x69,0x89,0xb1,0xac,0xc4,0x27,0x8e,0x88
Key ID
0x39,0x68,0xe1,0xb6,0xbd,0xee,0xf6,0x4f,0xab,0x76,0x8d,0x48,0x2d,0x8d,0x2b,0x6a,
sha_a:32
0x7b,0xec,0x8f,0x1b,0x60,0x4e,0xb4,0xab,0x3b,0xb,0xbd,0xb8,0x71,0xd6,0xba,0x71,0xb1,0x26,0x41,0x7d,0x99,0x55,0xdc,0x8e,0x64,0x76,0x15,0x23,0x1b,0xab,0x76,0x62,
The replacement using Crypto++ is as follows:
byte key_seed[] = { 0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF, 0x60, 0x44, 0x86, 0x71, 0x59, 0xF1, 0x6D, 0x6B, 0x75, 0x55, 0x44, 0xFC,0xD5, 0x11, 0x69, 0x89, 0xB1, 0xAC, 0xC4, 0x27, 0x8E, 0x88 };
byte key_id[] = { 0x39,0x68,0xe1,0xb6,0xbd,0xee,0xf6,0x4f,0xab,0x76,0x8d,0x48,0x2d,0x8d,0x2b,0x6a };
byte truncated_key_seed[sizeof(key_seed)];
memset( truncated_key_seed,0,sizeof(truncated_key_seed));
memcpy( key_seed, truncated_key_seed, sizeof(key_seed) );
byte output[SHA256::DIGESTSIZE];
memset(output,0,sizeof(output));
SHA256 sha_a;
sha_a.Update(truncated_key_seed,sizeof(key_seed));
sha_a.Update(key_id,sizeof(key_id));
sha_a.Final(output);
printf("size:%lu\n",sizeof(output));
PrintHex(output,sizeof(output));
But the output hash value is
DB 36 C9 F6 F7 29 6D 6F 52 21 DA 9F 55 1D AE BC 3E 5A 15 DF E1 37 07 EE 8F BC 73 61 5F D6 E1 C3
It differs from the sha_a result produced by the C# code.
According to MSDN and the Crypto++ reference, SHA256Managed::TransformBlock and SHA256Managed::TransformFinalBlock do the same thing as Crypto++'s Update and Final.
What difference between SHA256Managed and cryptopp::SHA256 causes this result?
Seems like a bug in your code to me:
sha_a.Update(truncated_key_seed, sizeof(key_seed));
Make sure that truncated_key_seed is identical in both versions, especially the bytes not included in the original key_seed... Note in particular that memcpy copies from its second argument into its first, so memcpy(key_seed, truncated_key_seed, sizeof(key_seed)) overwrites key_seed with zeros and leaves truncated_key_seed all zeros; you want memcpy(truncated_key_seed, key_seed, sizeof(key_seed)).
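For reference, both APIs implement plain incremental SHA-256, so hashing the 30 truncated seed bytes followed by the 16 key-id bytes must give the same digest in any library. A quick cross-check in Python, using the byte values from the question's output (this should reproduce the C# digest starting 0x7b, 0xec, 0x8f, ...):
import hashlib

key_seed = bytes([0x5D, 0x50, 0x68, 0xBE, 0xC9, 0xB3, 0x84, 0xFF, 0x60, 0x44,
                  0x86, 0x71, 0x59, 0xF1, 0x6D, 0x6B, 0x75, 0x55, 0x44, 0xFC,
                  0xD5, 0x11, 0x69, 0x89, 0xB1, 0xAC, 0xC4, 0x27, 0x8E, 0x88])
key_id = bytes([0x39, 0x68, 0xE1, 0xB6, 0xBD, 0xEE, 0xF6, 0x4F,
                0xAB, 0x76, 0x8D, 0x48, 0x2D, 0x8D, 0x2B, 0x6A])

sha = hashlib.sha256()
sha.update(key_seed)  # corresponds to TransformBlock / Crypto++ Update
sha.update(key_id)    # corresponds to TransformFinalBlock / Update + Final
print(sha.hexdigest())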