wolfTPM always receiving TPM_RC_BAD_TAG - cryptography

I am trying to use an Infineon SLB9670 with wolfTPM. When porting the library to my custom SPI functions and OS, I receive 0x1E (decimal 30), which means TPM_RC_BAD_TAG. Can my SPI connection be correct if I have already received the caps?
(The same code works fine on an STM32F7 board with the STM HAL SPI implementation.)
Thanks
rc = TPM2_Init(&tpm2Ctx, TPM2_IoCb, userCtx);
if (rc != 0) {
    tst_printf("\r\nTPM init failed! rc = %i;", rc);
    break;
}
else {
    tst_printf("\r\nTPM init success!");
    tst_printf("\r\nTPM2: Caps 0x%08x, Did 0x%04x, Vid 0x%04x, Rid 0x%2x \n",
        tpm2Ctx.caps,
        tpm2Ctx.did_vid >> 16,
        tpm2Ctx.did_vid & 0xFFFF,
        tpm2Ctx.rid);
}

/* define the default session auth */
XMEMSET(tpm_session, 0, sizeof(tpm_session));
tpm_session[0].sessionHandle = TPM_RS_PW;
rc = TPM2_SetSessionAuth(tpm_session);
if (rc != TPM_RC_SUCCESS) {
    tst_printf("TPM2_SetSessionAuth failed 0x%x: %s\n", rc, TPM2_GetRCString(rc));
    break;
}

Startup_In startup;
XMEMSET(&startup, 0, sizeof(Startup_In));
startup.startupType = TPM_SU_STATE;
rc = TPM2_Startup(&startup);
if (rc != TPM_RC_SUCCESS &&
    rc != TPM_RC_INITIALIZE /* TPM_RC_INITIALIZE = Already started */ ) {
    tst_printf("TPM2_Startup failed %i: %s\n", rc, TPM2_GetRCString(rc));
    //break;
}
tst_printf("\r\nTPM2_Startup pass!\f");
Output:
TPM init success!
TPM2: Caps 0x30000697, Did 0x001b, Vid 0x15d1, Rid 0x10
TPM2_Startup failed 30: Unknown
Values of cmd in TPM2_TIS_SendCommand:
80 01 00 00 00 0c 00 00 01 44 00 00 (working example)
00 00 00 00 00 0c 00 00 01 44 00 00 (my case)
80 01 is TPM_ST_NO_SESSIONS, the tag that TPM2_Packet_Finalize is supposed to write into the header; the rest is the big-endian command size 0x0000000c and command code 0x00000144 (TPM_CC_Startup).

The mistake was in the functions that prepare the packet: my version of the IAR compiler cannot handle __REV() for 16-bit values, so the 16-bit fields were never byte-swapped to big-endian. I used a small macro to handle it, and now everything works fine.
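For anyone hitting the same thing, a minimal sketch of such a macro (BSWAP16 is a name I made up; any correct 16-bit byte swap will do):

#include <stdint.h>

/* Portable 16-bit byte swap; a fallback for compilers whose __REV/__REV16
 * intrinsics mishandle 16-bit operands. (Hypothetical macro name.) */
#define BSWAP16(x) ((uint16_t)((((uint16_t)(x) & 0x00FFU) << 8) | \
                               (((uint16_t)(x) & 0xFF00U) >> 8)))

/* Example: on a little-endian MCU, the tag TPM_ST_NO_SESSIONS (0x8001)
 * must be stored as the bytes 80 01, i.e. BSWAP16(0x8001) == 0x0180. */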


How to inflate a gzip with a really old zlib?

I'm using a PPC platform that has an older version of zlib ported to it. Is it possible to use zlib 1.1.3 to inflate an archive made with gzip 1.5?
$ gzip --list --verbose vmlinux.z
method crc date time compressed uncompressed ratio uncompressed_name
defla 12169518 Apr 29 13:00 4261643 9199404 53.7% vmlinux
The first 32 bytes of the archive are
00000000 1f 8b 08 08 29 f4 8a 60 00 03 76 6d 6c 69 6e 75 |....)..`..vmlinu|
00000010 78 00 ec 9a 7f 54 1c 55 96 c7 6f 75 37 d0 fc 70 |x....T.U..ou7..p|
I've tried using this code (where source is a pointer to the first byte, at 1f 8b) with the three options A, B, and C for the windowBits initialization.
int ZEXPORT gunzip (dest, destLen, source, sourceLen)
    Bytef *dest;
    uLongf *destLen;
    const Bytef *source;
    uLong sourceLen;
{
    z_stream stream;
    int err;

    stream.next_in = (Bytef*)source;
    stream.avail_in = (uInt)sourceLen;
    /* Check for source > 64K on 16-bit machine: */
    if ((uLong)stream.avail_in != sourceLen) return Z_BUF_ERROR;

    stream.next_out = dest;
    stream.avail_out = (uInt)*destLen;
    if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR;

    stream.zalloc = (alloc_func)my_alloc;
    stream.zfree = (free_func)my_free;

    /* option A */
    err = inflateInit(&stream);
    /* option B */
    err = inflateInit2(&stream, 15 + 16);
    /* option C */
    err = inflateInit2(&stream, -MAX_WBITS);

    if (err != Z_OK) return err;

    err = inflate(&stream, Z_FINISH);
    if (err != Z_STREAM_END) {
        inflateEnd(&stream);
        return err == Z_OK ? Z_BUF_ERROR : err;
    }
    *destLen = stream.total_out;

    err = inflateEnd(&stream);
    return err;
}
Option A:
zlib inflate() fails with error Z_DATA_ERROR. "unknown compression method"
z_stream.avail_in = 4261640
z_stream.total_in = 1
z_stream.avail_out = 134152192
z_stream.total_out = 0
Option B:
zlib inflateInit2_() fails at line 118 with a Z_STREAM_ERROR.
/* set window size */
if (w < 8 || w > 15)
{
    inflateEnd(z);
    return Z_STREAM_ERROR;
}
Option C:
zlib inflate() fails with error Z_DATA_ERROR. "invalid block type"
z_stream.avail_in = 4261640
z_stream.total_in = 1
z_stream.avail_out = 134152192
z_stream.total_out = 0
Your option B would work for zlib 1.2.1 or later.
With zlib 1.1.3, there are two ways.
Use gzopen(), gzread(), and gzclose() to read the gzip stream from a file and decompress it into memory.
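A minimal sketch of this first way (assuming the archive is accessible as a file on the target; gunzip_file is my name, and the gz* calls are present unchanged in zlib 1.1.3):

#include <zlib.h>

/* Decompress a .gz file into a caller-provided buffer using the gz* layer,
 * which handles the gzip header, CRC-32, and length check internally. */
long gunzip_file(const char *path, unsigned char *dest, long destLen)
{
    gzFile in = gzopen(path, "rb");
    long total = 0;
    int n = 0;
    if (in == NULL) return -1;
    while (total < destLen &&
           (n = gzread(in, dest + total, (unsigned)(destLen - total))) > 0)
        total += n;
    gzclose(in);
    return (n < 0) ? -1 : total;   /* bytes decompressed, or -1 on error */
}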
To decompress from the gzip stream in memory, use your option C, raw inflate, after manually decoding the gzip header. Use crc32() to calculate the CRC-32 of the decompressed data as you inflate it. When the inflation completes, manually decode the gzip trailer, checking the CRC-32 and size of the decompressed data.
Manual decoding of the gzip header and trailer is simple to implement. See RFC 1952 for the description of the header and trailer.
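A minimal sketch of the second way (assumptions: the default zalloc/zfree are usable, the whole stream is in the buffer, and no bounds checking of the variable-length header fields is shown; gunzip_mem is my name):

#include <string.h>
#include <zlib.h>

/* Raw-inflate a gzip stream held in memory, then verify the trailer.
 * Sketch only: FHCRC is skipped unverified, and a malformed header
 * can read past the buffer. See RFC 1952 for the full format. */
int gunzip_mem(unsigned char *dest, unsigned long *destLen,
               const unsigned char *src, unsigned long srcLen)
{
    z_stream strm;
    unsigned long pos = 10, crc, isize, out;
    int flg, err;

    if (srcLen < 18 || src[0] != 0x1f || src[1] != 0x8b || src[2] != 8)
        return Z_DATA_ERROR;                   /* magic + deflate method */
    flg = src[3];
    if (flg & 4)                               /* FEXTRA: 16-bit LE length */
        pos += 2 + (src[pos] | ((unsigned long)src[pos + 1] << 8));
    if (flg & 8)  while (src[pos++] != 0) ;    /* FNAME: NUL-terminated */
    if (flg & 16) while (src[pos++] != 0) ;    /* FCOMMENT */
    if (flg & 2)  pos += 2;                    /* FHCRC */

    memset(&strm, 0, sizeof(strm));            /* NULL zalloc/zfree = defaults */
    strm.next_in = (Bytef *)(src + pos);
    strm.avail_in = (uInt)(srcLen - pos - 8);  /* exclude 8-byte trailer */
    strm.next_out = dest;
    strm.avail_out = (uInt)*destLen;
    err = inflateInit2(&strm, -MAX_WBITS);     /* option C: raw deflate */
    if (err != Z_OK) return err;
    err = inflate(&strm, Z_FINISH);
    out = strm.total_out;
    inflateEnd(&strm);
    if (err != Z_STREAM_END) return err == Z_OK ? Z_BUF_ERROR : err;
    *destLen = out;

    /* Trailer: CRC-32 then ISIZE, both little-endian. */
    pos = srcLen - 8;
    crc = src[pos] | ((unsigned long)src[pos + 1] << 8)
        | ((unsigned long)src[pos + 2] << 16) | ((unsigned long)src[pos + 3] << 24);
    isize = src[pos + 4] | ((unsigned long)src[pos + 5] << 8)
          | ((unsigned long)src[pos + 6] << 16) | ((unsigned long)src[pos + 7] << 24);
    if (crc != crc32(crc32(0L, Z_NULL, 0), dest, (uInt)out))
        return Z_DATA_ERROR;
    if (isize != (out & 0xffffffffUL))
        return Z_DATA_ERROR;
    return Z_OK;
}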

CubeMX-generated USB HID device sends wrong data when both endpoint and PMA address are changed

I'm debugging a problem with a composite device that I'm creating, and have recreated the issue in freshly-CubeMX-generated HID-only code, to make it easier to resolve.
I've added a small amount of code to main() to let me send USB HID mouse clicks, and flash an LED, when the blue button is pressed.
...
uint8_t click_report[CLICK_REPORT_SIZE] = {0};
extern USBD_HandleTypeDef hUsbDeviceFS;
...
int main(void)
{
  ...
  while (1)
  {
    /* USER CODE END WHILE */
    /* USER CODE BEGIN 3 */
    if (HAL_GPIO_ReadPin(B1_GPIO_Port, B1_Pin) == GPIO_PIN_SET) {
      HAL_GPIO_WritePin(LD4_GPIO_Port, LD4_Pin, GPIO_PIN_SET);
      click_report[0] = 1; // send button press
      USBD_HID_SendReport(&hUsbDeviceFS, click_report, CLICK_REPORT_SIZE);
      HAL_Delay(50);
      click_report[0] = 0; // send button release
      USBD_HID_SendReport(&hUsbDeviceFS, click_report, CLICK_REPORT_SIZE);
      HAL_Delay(200);
      HAL_GPIO_WritePin(LD4_GPIO_Port, LD4_Pin, GPIO_PIN_RESET);
    }
  }
}
I am using Wireshark and usbmon (on Ubuntu 16.04) to look at the packets which my STM32F3DISCOVERY board sends.
With this freshly-generated code, I can see URB_INTERRUPT packets being sent from 3.23.1. (Only the last part of that address, the endpoint, is relevant.)
The packet contents are:
01 00 00 00
00
00 00 00 00
00
as expected.
(The 5-byte click_reports are fragmented into 4-byte and 1-byte messages, as there is a 4-byte maximum packet size for HID.)
I then changed HID_EPIN_ADDR in usbd_hid.h from 0x81 to 0x83, to make the device use endpoint 3 for HID messages instead of endpoint 1.
//#define HID_EPIN_ADDR 0x81U
#define HID_EPIN_ADDR 0x83U
With this change, everything continued to work, with the expected change that packets are being sent from x.x.3. The packets still contain:
01 00 00 00
00
00 00 00 00
00
As far as I can see, this should not work, as I haven't yet allocated an address for endpoint 3 (0x83) in the PMA (packet memory area).
I do this by editing usbd_conf.c:
/* USER CODE BEGIN EndPoint_Configuration */
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x00 , PCD_SNG_BUF, 0x18);
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x80 , PCD_SNG_BUF, 0x58);
/* USER CODE END EndPoint_Configuration */
/* USER CODE BEGIN EndPoint_Configuration_HID */
//HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x81 , PCD_SNG_BUF, 0x100);
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x83 , PCD_SNG_BUF, 0x180);
/* USER CODE END EndPoint_Configuration_HID */
return USBD_OK;
}
Now, when I send the same 01 00 00 00 00 and 00 00 00 00 00 click_reports I see packet contents of:
58 00 2c 00
58
58 00 2c 00
58
I have traced the contents of the non-PMA buffer right down to USB_WritePMA() in stm32f3xx_ll_usb.c.
The sending code there is:
/* IN endpoint */
if (ep->is_in == 1U)
{
  /* Multi packet transfer */
  if (ep->xfer_len > ep->maxpacket)
  {
    len = ep->maxpacket;
    ep->xfer_len -= len;
  }
  else
  {
    len = ep->xfer_len;
    ep->xfer_len = 0U;
  }

  /* configure and validate Tx endpoint */
  if (ep->doublebuffer == 0U)
  {
    USB_WritePMA(USBx, ep->xfer_buff, ep->pmaadress, (uint16_t)len);
    PCD_SET_EP_TX_CNT(USBx, ep->num, len);
  }
  else
  {
Why is the data on the wire not the data that I give to USB_WritePMA(), once I've added the HAL_PCDEx_PMAConfig(...) call for endpoint address 0x83?
Update:
If I change usbd_conf.c to let endpoint address 0x83 use the PMA address that is normally used by 0x81:
/* USER CODE BEGIN EndPoint_Configuration */
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x00 , PCD_SNG_BUF, 0x18);
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x80 , PCD_SNG_BUF, 0x58);
/* USER CODE END EndPoint_Configuration */
/* USER CODE BEGIN EndPoint_Configuration_HID */
//HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x81 , PCD_SNG_BUF, 0x100);
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x83 , PCD_SNG_BUF, 0x100);
/* USER CODE END EndPoint_Configuration_HID */
the packets on the wire are still corrupted:
58 00 2c 00
58
58 00 2c 00
58
If I return usbd_conf.c to its initial, generated state (where 0x83 has no PMA address, and 0x81 uses 0x100):
/* USER CODE BEGIN EndPoint_Configuration */
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x00 , PCD_SNG_BUF, 0x18);
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x80 , PCD_SNG_BUF, 0x58);
/* USER CODE END EndPoint_Configuration */
/* USER CODE BEGIN EndPoint_Configuration_HID */
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x81 , PCD_SNG_BUF, 0x100);
//HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x83 , PCD_SNG_BUF, 0x100);
/* USER CODE END EndPoint_Configuration_HID */
the output works as expected:
01 00 00 00
00
00 00 00 00
00
Update 2:
I added a break-point in USB_ActivateEndpoint() in stm32f3xx_ll_usb.c.
Surprisingly this is only ever called for endpoint 0.
Therefore, the ep->pmaadress (sic) is never "written into hardware", and only used in higher-level code.
This must mean that the pmaadress values for the endpoints are left at some default value, and I do not know the default value for endpoint 0x83, so I can't set it.
When I return to work on Friday, I will add debugging to read-out the default values. If they do not exist, I will be very confused.
Update 3:
I added the following debugging:
uint16_t *tx_addr_ptr(USB_TypeDef *USBx, uint8_t ep_num) {
    register uint16_t *_wRegValPtr;
    register uint32_t _wRegBase = (uint32_t)USBx;
    _wRegBase += (uint32_t)(USBx)->BTABLE;
    _wRegValPtr = (uint16_t *)(_wRegBase + 0x400U + (((uint32_t)(ep_num) * 8U) * 2U));
    return _wRegValPtr;
}

uint16_t *rx_addr_ptr(USB_TypeDef *USBx, uint8_t ep_num) {
    register uint16_t *_wRegValPtr;
    register uint32_t _wRegBase = (uint32_t)USBx;
    _wRegBase += (uint32_t)(USBx)->BTABLE;
    _wRegValPtr = (uint16_t *)(_wRegBase + 0x400U + ((((uint32_t)(ep_num) * 8U) + 4U) * 2U));
    return _wRegValPtr;
}
...
HAL_StatusTypeDef USB_ActivateEndpoint(USB_TypeDef *USBx, USB_EPTypeDef *ep)
{
    ...
    int txaddrs[8] = {0};
    int rxaddrs[8] = {0};
    for (int i = 0; i < 8; ++i) {
        txaddrs[i] = *tx_addr_ptr(USBx, i);
        rxaddrs[i] = *rx_addr_ptr(USBx, i);
    }
This showed me the following values (in the debugger):
txaddrs:
0: 0x58
1: 0xf5c4
2: 0xc1c2
3: 0x100
rxaddrs:
0: 0x18
1: 0xfa9b
2: 0xcb56
3: 0x0
These, unexpectedly, look correct.
0x100 is the txaddr of endpoint 3, even though USB_ActivateEndpoint() has only just been called for the first time.
With a lot of grepping, I found that PCD_SET_EP_TX_ADDRESS (in stm32f3xx_hal_pcd.h) is not only used directly in USB_ActivateEndpoint(), but also in the PCD_SET_EP_DBUF0_ADDR macro from the same header.
PCD_SET_EP_DBUF0_ADDR does not appear to be used, so I do not know how the (changed) values from usbd_conf.c:
USBD_StatusTypeDef USBD_LL_Init(USBD_HandleTypeDef *pdev)
{
...
/* USER CODE BEGIN EndPoint_Configuration */
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x00 , PCD_SNG_BUF, 0x18);
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x80 , PCD_SNG_BUF, 0x58);
/* USER CODE END EndPoint_Configuration */
/* USER CODE BEGIN EndPoint_Configuration_HID */
//HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x81 , PCD_SNG_BUF, 0x100);
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x83 , PCD_SNG_BUF, 0x100);
/* USER CODE END EndPoint_Configuration_HID */
get into the memory-mapped USB registers.
I can infer, from the presence of 0x00 in rxaddrs[3] (endpoint 3), that these are written in pairs, as there is no call to HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData, 0x03, PCD_SNG_BUF, 0x0);.
Update 4:
After changing the device to use endpoint 1 again, the value of 0x100 in txaddrs[3] remained. It was simply there from the last run, which removes a little confusion.
Update 5:
It's a BTABLE problem. The BTABLE register has a value of 0x00, putting the btable at the start of the PMA.
The PMA looks like this (a diagram of the PMA layout appeared here), and the start of the PMA is the btable.
I found:
PMAAddr + BASEADDR_BTABLE + 0x00000000 : EP0_TX_ADDR
PMAAddr + BASEADDR_BTABLE + 0x00000002 : EP0_TX_COUNT
PMAAddr + BASEADDR_BTABLE + 0x00000004 : EP0_RX_ADDR
PMAAddr + BASEADDR_BTABLE + 0x00000006 : EP0_RX_COUNT
PMAAddr + BASEADDR_BTABLE + 0x00000008 : EP1_TX_ADDR
PMAAddr + BASEADDR_BTABLE + 0x0000000A : EP1_TX_COUNT
PMAAddr + BASEADDR_BTABLE + 0x0000000C : EP1_RX_ADDR
PMAAddr + BASEADDR_BTABLE + 0x0000000E : EP1_RX_COUNT
PMAAddr + BASEADDR_BTABLE + 0x00000010 : EP2_TX_ADDR
PMAAddr + BASEADDR_BTABLE + 0x00000012 : EP2_TX_COUNT
PMAAddr + BASEADDR_BTABLE + 0x00000014 : EP2_RX_ADDR
PMAAddr + BASEADDR_BTABLE + 0x00000016 : EP2_RX_COUNT
on https://community.st.com/s/question/0D50X00009XkaUASAZ/stm32-usb-endpoint-configuration-clarification-questions
This shows that endpoints 0x81 and 0x82 work because both pma[4] and pma[8] are set to 0x100.
Endpoint 0x83 does not work because pma[12] is set to 0x0.
This is consistent with the corrupted data having the value 58 00 2c 00: the USB hardware read pma[12] (0x0) as the TX address and therefore sent the uint16_t's from pma[0], which are 0x0058 and 0x002c, byte-reversed on the wire because of little-endianness. (Note: the PMA is only 16 bits wide, so there are only two bytes at each address here.)
The call to HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData, 0x83, PCD_SNG_BUF, 0x100); does not set up the btable pointer at pma[12]; it just records the PMA address to copy to.
Now I just need to find where the content of the btable is being written...
The TX address of EP3 is being overwritten by an incoming USB packet because it is located at the same offset in PMA as the RX buffer for the control EP0. The original code works okay because it only uses EP1.
How exactly these offsets are set depends on what's in the layers of STMCube, and my copy seems to be different, but this appears to be where the offsets of the RX and TX buffers of EP0 are set in the OP's code:
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x00 , PCD_SNG_BUF, 0x18);
HAL_PCDEx_PMAConfig((PCD_HandleTypeDef*)pdev->pData , 0x80 , PCD_SNG_BUF, 0x58);
These constants need to be changed to 0x40 and 0x80 (for example).
In my version, these offsets are defined in a header file and there is also an EP_NUM constant, but how it's used is unclear.
Everything else seems to be just distractions.
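To make the overlap concrete, a small sketch of the arithmetic (assuming BTABLE = 0 and single-buffered endpoints, as established above; addresses are USB-local):

#include <stdint.h>
#include <stdio.h>

/* Each endpoint's BTABLE entry is four 16-bit words (TX_ADDR, TX_COUNT,
 * RX_ADDR, RX_COUNT) = 8 bytes, laid out as in the list above. */
#define EP_TX_ADDR_OFS(ep) ((uint16_t)((ep) * 8U))
#define EP_RX_ADDR_OFS(ep) ((uint16_t)((ep) * 8U + 4U))

int main(void)
{
    /* EP3's TX_ADDR entry lands at 3 * 8 = 24 = 0x18, exactly where
     * CubeMX placed EP0's RX buffer (..., 0x00, PCD_SNG_BUF, 0x18),
     * so every packet received on EP0 clobbers EP3's TX address. */
    printf("EP3 TX_ADDR entry at PMA offset 0x%02x\n", EP_TX_ADDR_OFS(3));

    /* With 8 endpoints the table occupies 8 * 8 = 64 = 0x40 bytes,
     * which is why moving the first buffer up to 0x40 avoids it. */
    printf("BTABLE size for 8 endpoints: 0x%02x bytes\n", 8 * 8);
    return 0;
}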
A work-around is to add the two lines following // correct PMA BTABLE to HAL_PCD_EP_Transmit() in stm32f3xx_hal_pcd.c:
HAL_StatusTypeDef HAL_PCD_EP_Transmit(PCD_HandleTypeDef *hpcd, uint8_t ep_addr, uint8_t *pBuf, uint32_t len)
{
  PCD_EPTypeDef *ep;

  ep = &hpcd->IN_ep[ep_addr & EP_ADDR_MSK];

  /* setup and start the Xfer */
  ep->xfer_buff = pBuf;
  ep->xfer_len = len;
  ep->xfer_count = 0U;
  ep->is_in = 1U;
  ep->num = ep_addr & EP_ADDR_MSK;

  // correct PMA BTABLE
  uint32_t *btable = (uint32_t *) USB_PMAADDR;
  btable[ep->num * 4] = ep->pmaadress;
  ...
This causes a correction to the location of endpoint 3's TX buffer before every write. This is wasteful, but it was not sufficient to set it once, as the value in pma[12] was being overwritten.
I have used this workaround in successfully creating a composite CDC (serial) and HID device.
To solve this properly, I need an answer to: What initialises the contents of the STM32's USB BTABLE when the __HAL_RCC_USB_CLK_ENABLE() macro is executed in HAL_PCD_MspInit()?

3DES authentication no response

I send the command 1A:00 to a Mifare Ultralight C tag using an APDU command.
Here is the log:
inList passive target
write: 4A 1 0
read: 4B 1 1 0 44 0 7 4 C2 35 CA 2C 2C 80
write: 40 1 1A 0
I don't know why, when I send 1A 00, it does not respond with RndA.
My code is this:
bool success = nfc.inListPassiveTarget();
if (success) {
    uint8_t auth_apdu[] = {
        0x1A,
        0x00
    };
    uint8_t response[255];
    uint8_t responseLength = 255;

    success = nfc.inDataExchange(auth_apdu, sizeof(auth_apdu), response, &responseLength);
    if (success) {
        Serial.println("\n Successfully sent 1st auth_apdu \n");
        Serial.println("\n The response is: \n");
        nfc.PrintHexChar(response, responseLength);
    }
}
When I try to read pages with command 0x30, it works OK, but not with the authentication command 1A:00.
I don't know what I am doing wrong here.
The answer is that I should use inCommunicateThru (0x42) instead of inDataExchange (0x40).
Thus the correct command frame is: 0x42 1A 00.
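A minimal sketch of the changed call (assuming your PN532 library exposes inCommunicateThru() with the same signature as inDataExchange(); not every Arduino fork does):

uint8_t auth_cmd[] = { 0x1A, 0x00 };   // Ultralight C AUTHENTICATE, step 1
uint8_t response[255];
uint8_t responseLength = sizeof(response);

// InCommunicateThru (0x42) sends the payload as a raw frame, while
// InDataExchange (0x40) adds ISO/IEC 14443-4 framing that the
// Ultralight C native command set does not understand.
success = nfc.inCommunicateThru(auth_cmd, sizeof(auth_cmd),
                                response, &responseLength);
// On success the tag answers AFh followed by the 8-byte enc(RndB).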

Formatting a string to hex and validating it

I have an application that accepts hex values from a C++/CLI richtextbox.
The string comes from user input.
Sample input and expected output.
01 02 03 04 05 06 07 08 09 0A //good input
0102030405060708090A //bad input but can automatically be converted to good by adding spaces.
XX ZZ DD AA OO PP II UU HH SS //bad input this is not hex
01 000 00 00 00 00 00 01 001 0 //bad input hex is only 2 chars
How do I write a function that will:
1. Detect whether the input is good or bad.
2. If it is bad, determine what kind of bad input it is: no spaces, not hex, or not split into 2-character bytes.
3. If the only problem is missing spaces, add the spaces automatically.
So far I made a space checker by searching for a space like:
for ( int i = 2; i < input.size(); i++ )
{
if(inputpkt[i] == ' ')
{
cout << "good input" << endl;
i = i+2;
}
else
{
cout << "bad input. I will format for you" << endl;
}
}
But it doesn't really work as expected because it returns this:
01 000 //bad input
01 000 00 00 00 00 00 01 001 00 //good input
Update
1. Check whether the input is actually hex:
bool ishex(std::string const& s)
{
    return s.find_first_not_of("0123456789abcdefABCDEF ", 0) == std::string::npos;
}
Are you operating in C++/CLI, or in plain C++? You've got it tagged C++/CLI, but you're using std::string, not .NET System::String.
I suggest this as a general plan: First, split your large string into smaller ones based on any whitespace. For each individual string, make sure it only contains [0-9a-fA-F], and is a multiple of two characters long.
The implementation could go something like this:
array<Byte>^ ConvertString(String^ input)
{
    List<System::Byte>^ output = gcnew List<System::Byte>();

    // Splitting on a null string array makes it split on all whitespace.
    array<String^>^ words = input->Split(
        (array<String^>^)nullptr,
        StringSplitOptions::RemoveEmptyEntries);

    for each (String^ word in words)
    {
        if (word->Length % 2 == 1) throw gcnew Exception("Invalid input string");

        for (int i = 0; i < word->Length; i += 2)
        {
            // Parenthesized: '+' binds tighter than '<<'.
            output->Add((Byte)((GetHexValue(word[i]) << 4) + GetHexValue(word[i + 1])));
        }
    }

    return output->ToArray();
}
int GetHexValue(Char c) // Note: Upper case 'C' makes it System::Char
{
    // Return 0-15 for [0-9a-fA-F]; anything else is invalid input.
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    throw gcnew Exception("Invalid input string");
}

Using IOKit to return Mac's Serial number returns 4 extra characters

I'm playing with IOKit and have the following code. The general idea is to pass a platformExpert key to this small Core Foundation command-line application and have it print the decoded string. The test case is "serial-number". The code below, when run like:
./compiled serial-number
almost works, but returns the last 4 characters of the serial number at the beginning of the string, i.e. for an example serial such as C12D2JMPDDQX it would return
DDQXC12D2JMPDDQX
Any ideas?
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
int main (int argc, const char * argv[]) {
    CFStringRef parameter = CFSTR("serial-number");
    if (argv[1]) {
        parameter = CFStringCreateWithCString(
            NULL,
            argv[1],
            kCFStringEncodingUTF8);
    }

    CFDataRef data;
    io_service_t platformExpert = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("IOPlatformExpertDevice"));
    if (platformExpert)
    {
        data = IORegistryEntryCreateCFProperty(platformExpert,
                                               parameter,
                                               kCFAllocatorDefault, 0);
    }
    IOObjectRelease(platformExpert);

    CFIndex bufferLength = CFDataGetLength(data);
    UInt8 *buffer = malloc(bufferLength);
    CFDataGetBytes(data, CFRangeMake(0, bufferLength), (UInt8*) buffer);

    CFStringRef string = CFStringCreateWithBytes(kCFAllocatorDefault,
                                                 buffer,
                                                 bufferLength,
                                                 kCFStringEncodingUTF8,
                                                 TRUE);
    CFShow(string);
    return 0;
}
A simpler solution:
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
int main()
{
    CFMutableDictionaryRef matching = IOServiceMatching("IOPlatformExpertDevice");
    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault, matching);
    CFStringRef serialNumber = IORegistryEntryCreateCFProperty(service,
        CFSTR("IOPlatformSerialNumber"), kCFAllocatorDefault, 0);
    const char* str = CFStringGetCStringPtr(serialNumber, kCFStringEncodingMacRoman);

    printf("%s\n", str);      //-> stdout
    //CFShow(serialNumber);   //-> stderr

    IOObjectRelease(service);
    return 0;
}
compile with:
clang -framework IOKit -framework ApplicationServices cpuid.c -o cpuid
Fork it on GitHub if you like ;)
https://github.com/0infinity/IOPlatformSerialNumber
You may be misinterpreting the value of the serial-number parameter. If I use ioreg -f -k serial-number, I get this:
| "serial-number" =
| 00000000: 55 51 32 00 00 00 00 00 00 00 00 00 00 XX XX XX XX UQ2..........XXXX
| 00000011: XX XX XX XX 55 51 32 00 00 00 00 00 00 00 00 00 00 XXXXUQ2..........
| 00000022: 00 00 00 00 00 00 00 00 00 .........
(I've X'd out my Mac's serial number except for the repeated part.)
You don't see the null characters when you show the string because, well, they're null characters. I don't know why it has what seems like multiple fields separated by null characters, but that's what it seems to be.
I recommend doing further investigation to make sure there isn't a specification for how this data is supposed to be interpreted; if you don't find anything, I'd skip through the first run of nulls and get everything after that up to the next run of nulls.
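If you take that approach, a minimal sketch of the extraction (assuming the layout shown above: a short leading field, a run of NULs, then the full serial; extract_serial is my name):

#include <CoreFoundation/CoreFoundation.h>

/* Copy the field that follows the first run of NULs out of the raw
 * serial-number property bytes, stopping at the next NUL. */
static void extract_serial(const UInt8 *buf, CFIndex len,
                           char *out, size_t outLen)
{
    CFIndex i = 0;
    size_t j = 0;
    while (i < len && buf[i] != 0) i++;   /* skip the leading partial field */
    while (i < len && buf[i] == 0) i++;   /* skip the run of NULs */
    while (i < len && buf[i] != 0 && j + 1 < outLen)
        out[j++] = (char)buf[i++];        /* copy up to the next NUL */
    out[j] = '\0';
}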