Strange results for hard-coded bytes in Objective-C

I'm debugging some CCCrypt code in Cocoa, and I noticed that the IV I'm hard-coding (yes, I know it should be randomized) gives me weird results when I debug.
This is my IV:
unsigned char iv[17] = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x00};
And this is the contents of my memory after I step past the above line when debugging:
(lldb) p iv
(unsigned char [17]) $1 = "\x01\x02\x03\x04\x05\x06\a\b\t\n\v\f\r\x0e\x0f\x10"
Where do \a\b\t\n\v\f\r come from? I fully expected to see
\x07\x08\x09\x0a\x0b\x0c\x0d

Those are the normal representations of those character values in ASCII and UTF-8. Remember, these may be eight-bit integers, but they're being interpreted as characters in a string. The character with the value 7 is '\a', also known as the "bell character" (it is supposed to make your computer beep if you print it). The character with the value 8 is backspace, or '\b'. 9 is tab, or '\t'. Then come line feed, vertical tab, form feed and carriage return.
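If you want to see the raw byte values rather than lldb's character rendering, one option is to print them as hex yourself. A minimal C sketch (standard library only; the iv array is copied from the question):

#include <stdio.h>

unsigned char iv[17] = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
                        0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x00};

int main(void)
{
    /* Print each byte as two hex digits, bypassing character interpretation */
    for (size_t i = 0; i < sizeof iv; i++)
        printf("\\x%02x", iv[i]);
    printf("\n");
    return 0;
}

Alternatively, lldb can dump the same memory in hex directly, e.g. memory read -f x -s 1 -c 17 iv, which should show the \x07..\x0d values you expected.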

Related

USB HID difference between "Get Input Report" and "Input report"

I'm implementing a custom HID device that has the following interface:
0x06, 0xA0, 0xFF, // Usage Page (Vendor Defined 0xFFA0)
0x09, 0x01, // Usage (0x01)
0xA1, 0x01, // Collection (Application)
0x85, 0x01, // Report ID (1)
0x15, 0x00, // Logical Minimum (0)
0x26, 0x01, 0x00, // Logical Maximum (1)
0x75, 0x08, // Report Size (8)
0x95, 0x01, // Report Count (1)
0x91, 0x02, // Output (Data,Var,Abs,No Wrap,Linear,Preferred State,No Null Position,Non-volatile)
0x85, 0x01, // Report ID (1)
0x15, 0x00, // Logical Minimum (0)
0x26, 0x01, 0x00, // Logical Maximum (1)
0x75, 0x01, // Report Size (1)
0x95, 0x02, // Report Count (2)
0x81, 0x02, // Input (Data,Var,Abs,No Wrap,Linear,Preferred State,No Null Position)
0x75, 0x06, // Report Size (6)
0x95, 0x01, // Report Count (1)
0x81, 0x01, // Input (Const,Array,Abs,No Wrap,Linear,Preferred State,No Null Position)
0x85, 0x03, // Report ID (3)
0x15, 0x00, // Logical Minimum (0)
0x26, 0xFF, 0xFF, // Logical Maximum (65535)
0x75, 0x10, // Report Size (16)
0x95, 0x01, // Report Count (1)
0xB1, 0x02, // Feature (Data,Var,Abs,No Wrap,Linear,Preferred State,No Null Position,Non-volatile)
0xC0, // End Collection
When I plug this device into the computer through a USB logic analyzer I see it enumerate, then something (I'm not sure what, any ideas?) uses the HID report descriptor to intelligently grab a bunch of reports:
(control)(endpoint 0) Get Input Report[1]
(control)(endpoint 0) Get Feature Report[1]
The "Get Input Report" entry confused me as I thought input reports were sent via an interrupt transfer. If I use usbhid's hid_write, I see the following entry, so I must be at least half right about input reports being sent via interrupt transfers...:
(interrupt)(endpoint 1) Input Report[1]
I have been unable to find information describing the difference between control "Get Input Report" and interrupt "Input Report" transfers, which I'm hoping one of you can explain.
Why do control "get input reports" exist?
Why not just have the spec mandate a "get feature report" entry exist for every "input report" id entry?
Why does whatever is grabbing input/feature reports for every defined input/feature report use a control transfer for the input reports instead of an interrupt transfer?
Have a look at USB HID v1.1. There is a definition of the Get_Report request on page 51:
This request is useful at initialization time for absolute items and for determining the state of feature items. This request is not intended to be used for polling the device state on a regular basis.
This is exactly what the driver is doing here: it is retrieving the various reports to initialize its current state. Note that the host cannot request the device to send a report on its interrupt pipe, hence the request on the control pipe.
Chapter 4.4 explains the usage of the various endpoints.
Also note feature reports and input reports do not address the same data, even if they have the same report ID (report IDs are per report type).
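For concreteness, the request the driver issues is a class-specific control transfer on endpoint 0. A sketch of its setup packet, with the field values for "Get Input Report, ID 1" (layout per HID 1.1, section 7.2.1; the struct name is just for illustration):

#include <stdint.h>

/* Setup packet for GET_REPORT of Input report ID 1 */
struct usb_setup_packet {
    uint8_t  bmRequestType;  /* 0xA1: device-to-host, class request, to interface */
    uint8_t  bRequest;       /* 0x01: GET_REPORT */
    uint16_t wValue;         /* high byte = report type (1=Input, 2=Output,
                                3=Feature), low byte = report ID: 0x0101 here */
    uint16_t wIndex;         /* interface number */
    uint16_t wLength;        /* number of bytes to read back */
};

So "Get Input Report" is an ordinary control transfer on endpoint 0, while the interrupt "Input Report" entries are the device pushing reports over its interrupt IN endpoint.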

STM32 USB HID reports

For the past two weeks I have been trying to get my board (an STM32F4) to communicate over USB HID with a PC running Windows 7. I had success with this descriptor:
__ALIGN_BEGIN static uint8_t HID_ReportDesc[HID_MOUSE_REPORT_DESC_SIZE] __ALIGN_END =
{
0x06, 0xFF, 0x00, // USAGE_PAGE (Vendor Page: 0xFF00)
0x09, 0x01, // USAGE (Demo Kit)
0xa1, 0x01, // COLLECTION (Application)
//0x85, 0x01, // REPORT_ID (1)
0x09, 0x02, // USAGE (DATA)
0x15, 0x00, // LOGICAL_MINIMUM (0)
0x26, 0xff,0x00, // LOGICAL_MAXIMUM (255)
0x75, 0x08, // REPORT_SIZE (8)
0x95, 0x04, // REPORT_COUNT (4)
0x81, 0x02, // INPUT (Data,Var,Abs,Vol)
0xc0 // END_COLLECTION
};
With this, Windows recognizes my board as a compliant HID device.
Now if I want to send 32-bit data like a uint32, Windows recognizes the board, but it reports an error saying that it can't start the device! My descriptor:
__ALIGN_BEGIN static uint8_t HID_MOUSE_ReportDesc[HID_MOUSE_REPORT_DESC_SIZE] __ALIGN_END =
{
0x06, 0xFF, 0x00, // USAGE_PAGE (Vendor Page: 0xFF00)
0x09, 0x01, // USAGE (Demo Kit)
0xa1, 0x01, // COLLECTION (Application)
0x09, 0x02, // USAGE (DATA)
0x15, 0x00, // LOGICAL_MINIMUM (0)
0x27, 0xff,0xff,0xff,0xff, // LOGICAL_MAXIMUM (0xFFFFFFFF)
0x75, 0x20, // REPORT_SIZE (32)
0x95, 0x01, // REPORT_COUNT (1)
0x81, 0x02, // INPUT (Data,Var,Abs,Vol)
0xc0 // END_COLLECTION
};
I didn't understand why it didn't work!
After that, I used USBlyzer to scan the other USB devices on my PC, and I captured this descriptor from my SpacePilot 3D mouse:
Endpoint Descriptor 83 3 In, Interrupt, 16 ms
Offset Field Size Value Description
0 bLength 1 07h
1 bDescriptorType 1 05h Endpoint
2 bEndpointAddress 1 83h 3 In
3 bmAttributes 1 03h Interrupt
1..0: Transfer Type ......11 Interrupt
7..2: Reserved 000000..
4 wMaxPacketSize 2 0007h 7 bytes
6 bInterval 1 08h 16 ms
Interface 1 HID Report Descriptor Multi-Axis Controller
Item Tag (Value) Raw Data
Usage Page (Generic Desktop) 05 01
Usage (Multi-Axis Controller) 09 08
Collection (Application) A1 01
Collection (Physical) A1 00
Report ID (1) 85 01
Logical Minimum (-500) 16 0C FE
Logical Maximum (500) 26 F4 01
Physical Minimum (-32768) 36 00 80
Physical Maximum (32767) 46 FF 7F
Usage (X) 09 30
Usage (Y) 09 31
Usage (Z) 09 32
Report Size (16) 75 10
Report Count (3) 95 03
Input (Data,Var,Abs,NWrp,Lin,Pref,NNul,Bit) 81 02
End Collection C0
If I try this, it works fine:
__ALIGN_BEGIN static uint8_t HID_MOUSE_ReportDesc[HID_MOUSE_REPORT_DESC_SIZE] __ALIGN_END =
{
0x05, 0x01, // Usage Page (Generic Desktop)
0x09, 0x08, //Usage (Multi-Axis Controller)
0xa1, 0x01, // COLLECTION (Application)
0xa1, 0x00, // Collection (Physical)
0x85, 0x01, // Report ID (1)
0x16,0x0c,0xfe, // Logical minimum (-500)
0x26,0xf4,0x01, // Logical maximum (500)
0x35,0x00, // Physical Minimum (0)
0x46,0xff,0x00, // Physical Maximum (255)
0x09,0x30, // Usage(X)
0x09,0x31, // Usage(Y)
0x09,0x32, // Usage(Z)
0x09,0x33, // Usage(RX)
0x09,0x34, // Usage(RY)
0x09,0x35, // Usage(RZ)
0x75, 0x08, // REPORT_SIZE (8)
0x95, 0x06, // REPORT_COUNT (6)
0x81, 0x02, // INPUT (Data,Var,Abs,Vol)
0xc0, // END_COLLECTION
0xc0 // END_COLLECTION
};
But if I try the same descriptor as my 3D mouse:
__ALIGN_BEGIN static uint8_t HID_MOUSE_ReportDesc[HID_MOUSE_REPORT_DESC_SIZE] __ALIGN_END =
{
0x05, 0x01, // Usage Page (Generic Desktop)
0x09, 0x08, // Usage (Multi-Axis Controller)
0xa1, 0x01, // COLLECTION (Application)
0xa1, 0x00, // Collection (Physical)
0x85, 0x01, // Report ID (1)
0x16,0x0c,0xfe, // Logical minimum (-500)
0x26,0xf4,0x01, // Logical maximum (500)
0x35,0x00,0x80, // Physical Minimum (-32768)
0x46,0xff,0x7f, // Physical Maximum (32767)
0x09,0x30, // Usage(X)
0x09,0x31, // Usage(Y)
0x09,0x32, // Usage(Z)
0x75, 0x10, // REPORT_SIZE (16)
0x95, 0x03, // REPORT_COUNT (3)
0x81, 0x02, // INPUT (Data,Var,Abs,Vol)
0xc0, // END_COLLECTION
0xc0 // END_COLLECTION
};
Windows gives me the same error and can't start the device!
What is wrong? Do I need a special driver for Windows to send 32-bit data (int32)? For information, I use the HID library on the PC side of my application.
How can I resolve this?
The four-byte LOGICAL_MAXIMUM descriptor tag (0x27) is only valid up to 0x7FFFFFFF, as it describes the maximum of a signed int data field. If you want to describe an unsigned int data field, you would need to use the eight-byte LOGICAL_MAXIMUM descriptor tag like this:
0x28, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00
Your second example would work, I think, except for the typo in the Physical Minimum descriptor tag: it should be 0x36, not 0x35.
When I tried your last descriptor (using my Arduino USBComposite library for STM32F1), Windows gave me an error about the descriptor having an unknown item. But when I changed the 0x35 in the Physical Minimum line to 0x36, Windows recognized the item.
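For reference, the corrected lines in the last descriptor would read (everything else unchanged):

0x36,0x00,0x80, // Physical Minimum (-32768) -- two-byte tag is 0x36
0x46,0xff,0x7f, // Physical Maximum (32767)

The one-byte tag 0x35 tells the parser that only a single data byte follows, so the 0x80 after it is parsed as the start of the next item, throwing the rest of the descriptor out of sync.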

trouble with {base64 encoding of sha256 result} password generator

I'd like to encode a SHA-256 output (in hex) into a 16-character base64 string to use as a password. base64 appears not to be what I thought it was. Here's what I want:
(0) "00000000000000000000000000000000" -> "AAAAAAAAAAAAAAAA"
(64) "00000000000000000000000000000040" -> "AAAAAAAAAAAAAABA"
(255) "000000000000000000000000000000ff" -> "AAAAAAAAAAAAAAC/"
(2^16+16) "0000000000000000000000000000100f" -> "AAAAAAAAAAAAAPAP"
Here's what I get:
base64encode("00000000000000000000000000000000")
"MDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAA"
This result is consistent with online converters, with R, etc., so obviously base64 is not what I think it is and I'm trying to do something else. If I'm not trying to "encode in base64", what am I trying to do?
My favorite is:
> base64encode(64)
[1] "AAAAAAAAUEA="
> base64encode("64")
[1] "NjQA"
which just baffles me
Base64 works on byte arrays, so in your example, base64encode("00000000000000000000000000000000") is encoding the string value "000..." as a byte array. Since the byte value of the character "0" is 0x30, you're essentially encoding a byte array consisting of 32 0x30 bytes (and, by the looks of it, a null terminator (0x00) at the end).
If you're trying to get a 16-character output, you need to encode a 12-byte input (since Base64 produces 4 characters of output for every 3 bytes of input), e.g. (you don't give the language, so I'm guessing at the syntax for byte arrays as { 0x.., 0x.., ... }):
base64encode({0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
= "AAAAAAAAAAAAAAAA"
base64encode({0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
= "AAAAAAAAAAAAAAAB"
base64encode({0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40})
= "AAAAAAAAAAAAAABA"
etc...
The SHA-xxx algorithms naturally produce byte-array output, so you should be able to take the appropriate number of bytes from it and pass them to base64encode. If your SHA method produces a hex string output, then you'll need to convert the hex string back to a byte array before passing it to the Base64 encoder.
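As an illustration of that last step, here is a minimal C sketch of the hex-string-to-bytes conversion (the hex_to_bytes name is mine; the base64 step itself is left to whichever encoder you are already using):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Convert a hex string (e.g. a SHA-256 digest rendered as hex) back into
   raw bytes. Assumes valid hex digits and an even-length input. */
static size_t hex_to_bytes(const char *hex, uint8_t *out, size_t max)
{
    size_t n = strlen(hex) / 2;
    if (n > max) n = max;
    for (size_t i = 0; i < n; i++)
        sscanf(hex + 2 * i, "%2hhx", &out[i]);
    return n;
}

int main(void)
{
    uint8_t buf[16];
    size_t n = hex_to_bytes("00000000000000000000000000000040", buf, sizeof buf);
    /* buf now holds raw bytes; hand 12 of them to a base64 encoder
       to get a 16-character string. */
    for (size_t i = 0; i < n; i++) printf("%02x ", buf[i]);
    printf("\n");
    return 0;
}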

Authentication with Mifare Ultralight C

I want to authenticate with a Mifare Ultralight C tag. I have already written the key into the tag; the key is 0x"000102030405060708090A0B0C0D0E0F", so I have to write it backwards into pages 0x2C to 0x2F, like this:
uint8_t buffer1[] = {0x07, 0x06, 0x05, 0x04};
uint8_t buffer2[] = {0x03, 0x02, 0x01, 0x00};
uint8_t buffer3[] = {0x0F, 0x0E, 0x0D, 0x0C};
uint8_t buffer4[] = {0x0B, 0x0A, 0x09, 0x08};
// write the data
success = nfc->mifareultralight_WritePage(0x2C, buffer1);
success &= nfc->mifareultralight_WritePage(0x2D, buffer2);
success &= nfc->mifareultralight_WritePage(0x2E, buffer3);
success &= nfc->mifareultralight_WritePage(0x2F, buffer4);
But I don't know the command for authentication. Can anyone tell me the authentication steps and commands to authenticate with this key?
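The Ultralight C handshake is a two-pass 3DES mutual authentication started with the AUTHENTICATE command 0x1A (described in NXP's MF0ICU2 datasheet). A sketch of the first pass, assuming the same Adafruit-PN532-style nfc object as the write calls above (the inDataExchange call and buffer sizes are my assumptions):

uint8_t cmd[] = {0x1A, 0x00};  // AUTHENTICATE, part 1
uint8_t resp[16];
uint8_t respLen = sizeof(resp);
// The tag should answer 0xAF followed by ek(RndB): an 8-byte random
// number encrypted with the 16-byte 2K3DES key (CBC mode, zero IV).
bool ok = nfc->inDataExchange(cmd, sizeof(cmd), resp, &respLen);
// Part 2: decrypt RndB, rotate it left by one byte to get RndB',
// generate your own RndA, and send 0xAF + ek(RndA || RndB').
// The tag replies 0x00 + ek(RndA'); verify that RndA' is RndA
// rotated left by one byte. A DES/3DES implementation is needed
// on the reader side for these steps.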

Objective-C: How to correctly initialize char[]?

I need to take a char[] array and copy its value to another, but I fail every time.
It works using this format:
char array[] = { 0x00, 0x00, 0x00 };
However, when I try to do this:
char array[] = char new_array[];
it fails, even though the new_array is just like the original.
Any help would be kindly appreciated.
Thanks
To copy at runtime, the usual C method is to use the strncpy or memcpy functions.
If you want two char arrays initialized to the same constant initializer at compile time, you're probably stuck with using #define:
#define ARRAY_INIT { 0x00, 0x00, 0x00 }
char array[] = ARRAY_INIT;
char new_array[] = ARRAY_INIT;
Thing is, this is rarely done because there's usually a better implementation.
EDIT: Okay, so you want to copy arrays at runtime. This is done with memcpy, part of <string.h> (of all places).
If I'm reading you right, you have initial conditions like so:
char array[] = { 0x00, 0x00, 0x00 };
char new_array[] = { 0x01, 0x00, 0xFF };
Then you do something, changing the arrays' contents, and after it's done, you want to set array to match new_array. That's just this:
memcpy(array, new_array, sizeof(array));
/*     ^      ^          ^
       |      |          +--- size in bytes
       |      +-------------- source array
       +--------------------- destination array
*/
The library writers chose to order the arguments with the destination first because that's the same order as in assignment: destination = source.
There is no built-in, language-level means of copying primitive arrays like this in C, Objective-C, or C++. C++ encourages people to use std::vector, and Objective-C encourages the use of NSArray.
I'm still not sure of exactly what you want, though.