Formatting and validating a hex string - C++/CLI

I have an application that accepts hex values from a C++/CLI richtextbox.
The string comes from a user input.
Sample inputs and how they should be classified:
01 02 03 04 05 06 07 08 09 0A // good input
0102030405060708090A // bad input, but can be fixed automatically by adding spaces
XX ZZ DD AA OO PP II UU HH SS // bad input: not hex
01 000 00 00 00 00 00 01 001 0 // bad input: a hex byte is exactly 2 chars
How do I write a function that:
1. Detects whether the input is good or bad.
2. If it is bad, determines which kind of bad input it is: no spaces, not hex, or not split into 2-char bytes.
3. If the only problem is missing spaces, inserts the spaces automatically.
So far I have made a space checker that searches for spaces:
for (int i = 2; i < input.size(); i++)
{
    if (input[i] == ' ')
    {
        cout << "good input" << endl;
        i = i + 2;
    }
    else
    {
        cout << "bad input. I will format for you" << endl;
    }
}
But it doesn't really work as expected, because it classifies these inputs as:
01 000 //bad input
01 000 00 00 00 00 00 01 001 00 //good input
Update
1. Check if the input is actually hex:
bool ishex(std::string const& s)
{
    return s.find_first_not_of("0123456789abcdefABCDEF ", 0) == std::string::npos;
}

Are you operating in C++/CLI, or in plain C++? You've got it tagged C++/CLI, but you're using std::string, not .NET's System::String.
I suggest this as a general plan: first, split your large string into smaller ones based on any whitespace. Then, for each individual string, make sure it contains only [0-9a-fA-F] and is a multiple of two characters long.
The implementation could go something like this:
array<Byte>^ ConvertString(String^ input)
{
    List<System::Byte>^ output = gcnew List<System::Byte>();

    // Splitting on a null string array makes it split on all whitespace.
    array<String^>^ words = input->Split(
        (array<String^>^)nullptr,
        StringSplitOptions::RemoveEmptyEntries);

    for each (String^ word in words)
    {
        if (word->Length % 2 == 1) throw gcnew Exception("Invalid input string");

        for (int i = 0; i < word->Length; i += 2)
        {
            // Note the parentheses around the shift: '+' binds tighter than '<<'.
            output->Add((Byte)((GetHexValue(word[i]) << 4) + GetHexValue(word[i + 1])));
        }
    }
    return output->ToArray();
}
int GetHexValue(Char c) // Note: upper-case 'C' makes it System::Char
{
    // Return an integer between 0 and 15, or throw if not [0-9a-fA-F].
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    throw gcnew Exception("Invalid input string");
}
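For completeness, a minimal sketch (not part of the original answer) of how the two functions fit together; the sample string and messages are illustrative:
// Hypothetical caller: parse the good sample from the question and
// report invalid input via the exception thrown above.
try
{
    array<Byte>^ bytes = ConvertString("01 02 03 04 05 06 07 08 09 0A");
    Console::WriteLine("Parsed {0} bytes", bytes->Length);
}
catch (Exception^ e)
{
    Console::WriteLine("Bad input: {0}", e->Message);
}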

Related

How to inflate a gzip with a really old zlib?

I'm using a PPC platform that has an older version of zlib ported to it. Is it possible to use zlib 1.1.3 to inflate an archive made with gzip 1.5?
$ gzip --list --verbose vmlinux.z
method crc date time compressed uncompressed ratio uncompressed_name
defla 12169518 Apr 29 13:00 4261643 9199404 53.7% vmlinux
The first 32 bytes of the archive are
00000000 1f 8b 08 08 29 f4 8a 60 00 03 76 6d 6c 69 6e 75 |....)..`..vmlinu|
00000010 78 00 ec 9a 7f 54 1c 55 96 c7 6f 75 37 d0 fc 70 |x....T.U..ou7..p|
I've tried using this code (where source is a pointer to the first byte, at 1f 8b) with each of the three options A, B, and C for the window-bits (WBITS) initialization.
int ZEXPORT gunzip (dest, destLen, source, sourceLen)
    Bytef *dest;
    uLongf *destLen;
    const Bytef *source;
    uLong sourceLen;
{
    z_stream stream;
    int err;

    stream.next_in = (Bytef*)source;
    stream.avail_in = (uInt)sourceLen;
    /* Check for source > 64K on 16-bit machine: */
    if ((uLong)stream.avail_in != sourceLen) return Z_BUF_ERROR;

    stream.next_out = dest;
    stream.avail_out = (uInt)*destLen;
    if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR;

    stream.zalloc = (alloc_func)my_alloc;
    stream.zfree = (free_func)my_free;

    /* option A */
    err = inflateInit(&stream);
    /* option B */
    err = inflateInit2(&stream, 15 + 16);
    /* option C */
    err = inflateInit2(&stream, -MAX_WBITS);

    if (err != Z_OK) return err;

    err = inflate(&stream, Z_FINISH);
    if (err != Z_STREAM_END) {
        inflateEnd(&stream);
        return err == Z_OK ? Z_BUF_ERROR : err;
    }
    *destLen = stream.total_out;

    err = inflateEnd(&stream);
    return err;
}
Option A:
zlib inflate() fails with error Z_DATA_ERROR. "unknown compression method"
z_stream.avail_in = 4261640
z_stream.total_in = 1
z_stream.avail_out = 134152192
z_stream.total_out = 0
Option B:
zlib inflateInit2_() fails at line 118 with a Z_STREAM_ERROR.
/* set window size */
if (w < 8 || w > 15)
{
    inflateEnd(z);
    return Z_STREAM_ERROR;
}
Option C:
zlib inflate() fails with error Z_DATA_ERROR. "invalid block type"
z_stream.avail_in = 4261640
z_stream.total_in = 1
z_stream.avail_out = 134152192
z_stream.total_out = 0
Your option B would work for zlib 1.2.1 or later.
With zlib 1.1.3, there are two ways.
Use gzopen(), gzread(), and gzclose() to read the gzip stream from a file and decompress it into memory.
To decompress from a gzip stream already in memory, use your option C (raw inflate) after manually decoding the gzip header. Use crc32() to compute the CRC-32 of the decompressed data as you inflate. When inflation completes, manually decode the gzip trailer, checking the CRC-32 and the size of the decompressed data.
Manual decoding of the gzip header and trailer is simple to implement; see RFC 1952 for the description of both, and the sketch below for the header.
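A minimal sketch of the header decoding, assuming the whole file is in memory as in the question; skip_gzip_header is an illustrative name, and the flag handling follows RFC 1952 (the dump above has FLG = 08, i.e. only FNAME is present):
/* Returns the offset of the raw deflate data within a gzip member,
 * or -1 if the header is invalid. */
static long skip_gzip_header(const unsigned char *p, long len)
{
    long at = 10;              /* fixed part: ID1 ID2 CM FLG MTIME(4) XFL OS */
    unsigned flg;

    if (len < 10 || p[0] != 0x1f || p[1] != 0x8b || p[2] != 8)
        return -1;             /* bad magic, or CM is not deflate */
    flg = p[3];
    if (flg & 4) {             /* FEXTRA: 2-byte little-endian length + data */
        if (at + 2 > len) return -1;
        at += 2 + (p[at] + (p[at + 1] << 8));
    }
    if (flg & 8)               /* FNAME: NUL-terminated file name */
        while (at < len && p[at++] != 0) ;
    if (flg & 16)              /* FCOMMENT: NUL-terminated comment */
        while (at < len && p[at++] != 0) ;
    if (flg & 2)               /* FHCRC: 2-byte header CRC */
        at += 2;
    return at <= len ? at : -1;
}
The deflate stream then starts at the returned offset; inflate it with your option C, and the last 8 bytes of the file hold the little-endian CRC-32 and uncompressed size (ISIZE) to verify against.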

Converting String using specific encoding to get just one character

I'm on this frustrating journey trying to get a specific character from a Swift string. I have an Objective-C function, something like
- (NSString *)doIt:(char)c
that I want to call from Swift.
This c is eventually passed to a C function in the back that does the heavy lifting, but that function gets tripped up when c is 0xA0 (NO-BREAK SPACE).
Now I have two questions (apologies SO).
I am trying to use different encodings, especially the ASCII variants, hoping one would convert 0xA0 to a space (0x20, dec 32). The verdict seems to be that I need to hardcode this, but if there is a failsafe, non-hardcoded way, I'd like to hear about it!
I am really struggling with the conversion itself. How do I access a specific character using a specific encoding in Swift?
a) I can use
s.utf8CString[i]
but then I am bound to UTF-8.
b) I can use something like
let s = "\u{a0}"
let p = UnsafeMutablePointer<CChar>.allocate(capacity: n)
defer {
    p.deallocate()
}
// Convert to ASCII
NSString(string: s).getCString(p,
    maxLength: n,
    encoding: CFStringConvertEncodingToNSStringEncoding(CFStringBuiltInEncodings.ASCII.rawValue))
// Hope for 32
let c = p[i]
but this seems overkill. The string is converted to NSString to apply the encoding and I need to allocate a pointer, all just to get a single character.
c) Here it seems Swift String's withCString is the man for the job, but I cannot even get it to compile. Below is what Xcode's completion gives, but even after fiddling with it for a long time I am still stuck.
// How do I use this
// ??
s.withCString(encodedAs: _UnicodeEncoding.Protocol) { (UnsafePointer<FixedWidthInteger & UnsignedInteger>) -> Result in
    // ??
}
TIA
There are two withCString() methods: withCString(_:) calls the given closure with a pointer to the contents of the string, represented as a null-terminated sequence of UTF-8 code units. Example:
// An emulation of your Objective-C method.
func doit(_ c: CChar) {
    print(c, terminator: " ")
}

let s = "a\u{A0}b"
s.withCString { ptr in
    var p = ptr
    while p.pointee != 0 {
        doit(p.pointee)
        p += 1
    }
}
print()
// Output: 97 -62 -96 98
Here -62 -96 is the signed character representation of the UTF-8 sequence C2 A0 of the NO-BREAK SPACE character U+00A0.
If you just want to iterate over all UTF-8 code units of the string sequentially then you can simply use the .utf8 view. The (unsigned) UInt8 bytes must be converted to the corresponding (signed) CChar:
let s = "a\u{A0}b"
for c in s.utf8 {
    doit(CChar(bitPattern: c))
}
print()
I am not aware of a method which transforms U+00A0 to a “normal” space character, so you have to do that manually. With
let s = "a\u{A0}b".replacingOccurrences(of: "\u{A0}", with: " ")
the output of the above program would be 97 32 98.
The withCString(encodedAs:_:) method calls the given closure with a pointer to the contents of the string, represented as a null-terminated sequence of code units. Example:
let s = "a\u{A0}b€"
s.withCString(encodedAs: UTF16.self) { ptr in
    var p = ptr
    while p.pointee != 0 {
        print(p.pointee, terminator: " ")
        p += 1
    }
}
print()
// Output: 97 160 98 8364
This method is probably of limited use for your purpose because it can only be used with UTF8, UTF16 and UTF32.
For other encodings you can use the data(using:) method. It produces a Data value which is a sequence of UInt8 (an unsigned type). As above, these must be converted to the corresponding signed character:
let s = "a\u{A0}b"
if let data = s.data(using: .isoLatin1) {
    data.forEach {
        doit(CChar(bitPattern: $0))
    }
}
print()
// Output: 97 -96 98
Of course this may fail if the string is not representable in the given encoding.

wolfTPM always receiving TPM_RC_BAD_TAG

I am trying to use an Infineon SLB9670 with wolfTPM. When porting the library to custom SPI functions and a custom OS, I receive 0x1E (dec 30), which means TPM_RC_BAD_TAG. Is my SPI connection correct if I have already received the caps?
(The same code works fine on an STM32F7 board with the STM HAL SPI implementation.)
Thanks
rc = TPM2_Init(&tpm2Ctx, TPM2_IoCb, userCtx);
if (rc != 0)
{
    tst_printf("\r\nTPM init failed! rc = %i;", rc);
    break;
}
else
{
    tst_printf("\r\nTPM init success!");
    tst_printf("\r\nTPM2: Caps 0x%08x, Did 0x%04x, Vid 0x%04x, Rid 0x%02x \n",
        tpm2Ctx.caps,
        tpm2Ctx.did_vid >> 16,
        tpm2Ctx.did_vid & 0xFFFF,
        tpm2Ctx.rid);
}

/* define the default session auth */
XMEMSET(tpm_session, 0, sizeof(tpm_session));
tpm_session[0].sessionHandle = TPM_RS_PW;
rc = TPM2_SetSessionAuth(tpm_session);
if (rc != TPM_RC_SUCCESS) {
    tst_printf("TPM2_SetSessionAuth failed 0x%x: %s\n", rc, TPM2_GetRCString(rc));
    break;
}

Startup_In startup;
XMEMSET(&startup, 0, sizeof(Startup_In));
startup.startupType = TPM_SU_STATE;
rc = TPM2_Startup(&startup);
if (rc != TPM_RC_SUCCESS &&
    rc != TPM_RC_INITIALIZE /* TPM_RC_INITIALIZE = already started */ ) {
    tst_printf("TPM2_Startup failed %i: %s\n", rc, TPM2_GetRCString(rc));
    //break;
}
tst_printf("\r\nTPM2_Startup pass!\f");
Output:
TPM init success!
TPM2: Caps 0x30000697, Did 0x001b, Vid 0x15d1, Rid 0x10
TPM2_Startup failed 30: Unknown
Edit:
Values of cmd in TPM2_TIS_SendCommand:
80 01 00 00 00 0c 00 00 01 44 00 00 (working example)
00 00 00 00 00 0c 00 00 01 44 00 00 (my case)
80 01 is TPM_ST_NO_SESSIONS, which has to be added by TPM2_Packet_Finalize!
The mistake was in the functions that prepare the packet: my version of the IAR compiler cannot handle __REV() for 16-bit values, so the tag was never byte-swapped. I used a small macro instead, and now everything works fine.
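The macro itself isn't shown here; a plain byte swap like the following is the usual portable stand-in (the name BSWAP16 is an illustrative choice, not from the original post):
#include <stdint.h>

/* Portable 16-bit byte swap, replacing the unusable intrinsic. On a
 * little-endian MCU this makes TPM_ST_NO_SESSIONS (0x8001) go out on the
 * wire as the big-endian bytes 80 01. */
#define BSWAP16(x) ((uint16_t)((((uint16_t)(x) & 0x00FFu) << 8) | \
                               (((uint16_t)(x) & 0xFF00u) >> 8)))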

3DES authentication no response

I send the command 1A:00 to a Mifare Ultralight C tag using an APDU command.
Here is the log:
inList passive target
write: 4A 1 0
read: 4B 1 1 0 44 0 7 4 C2 35 CA 2C 2C 80
write: 40 1 1A 0
I don't know why, when I send 1A 00, it does not respond with RndA.
My code is this:
bool success = nfc.inListPassiveTarget();
if (success) {
    uint8_t auth_apdu[] = {
        0x1A,
        0x00
    };
    uint8_t response[255];
    uint8_t responseLength = 255;

    success = nfc.inDataExchange(auth_apdu, sizeof(auth_apdu), response, &responseLength);
    if (success) {
        Serial.println("\n Successfully sent 1st auth_apdu \n");
        Serial.println("\n The response is: \n");
        nfc.PrintHexChar(response, responseLength);
    }
}
When I try to read pages with command 0x30, it works OK, but not with the authentication command 1A:00.
I don't know what I am doing wrong here.
The answer is that I should use inCommunicateThru (0x42) instead of inDataExchange (0x40); the raw exchange is what Ultralight C's native (non-ISO-DEP) commands need.
Thus the correct frame should be: 0x42 1A 00
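Assuming the PN532 library in use exposes inCommunicateThru with the same shape as inDataExchange (a guess; check your library's header), the fix is a one-line change:
uint8_t auth_apdu[] = { 0x1A, 0x00 };
uint8_t response[255];
uint8_t responseLength = 255;

// InCommunicateThru (0x42) sends the frame transparently, without the
// ISO-DEP framing that InDataExchange (0x40) applies.
success = nfc.inCommunicateThru(auth_apdu, sizeof(auth_apdu),
                                response, &responseLength);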

Using IOKit to return Mac's Serial number returns 4 extra characters

I'm playing with IOKit and have the following code. The general idea is to pass a platform-expert key to this small Core Foundation command-line application and have it print the decoded string. The test case is "serial-number". The code below, when run like
./compiled serial-number
almost works, but returns the last 4 characters of the serial number at the beginning of the string; i.e., for an example serial such as C12D2JMPDDQX it would return
DDQXC12D2JMPDDQX
Any ideas?
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
#include <stdlib.h> /* malloc */

int main(int argc, const char * argv[]) {
    CFStringRef parameter = CFSTR("serial-number");
    if (argv[1]) {
        parameter = CFStringCreateWithCString(
            NULL,
            argv[1],
            kCFStringEncodingUTF8);
    }

    CFDataRef data;
    io_service_t platformExpert = IOServiceGetMatchingService(kIOMasterPortDefault,
        IOServiceMatching("IOPlatformExpertDevice"));
    if (platformExpert)
    {
        data = IORegistryEntryCreateCFProperty(platformExpert,
            parameter,
            kCFAllocatorDefault, 0);
    }
    IOObjectRelease(platformExpert);

    CFIndex bufferLength = CFDataGetLength(data);
    UInt8 *buffer = malloc(bufferLength);
    CFDataGetBytes(data, CFRangeMake(0, bufferLength), (UInt8*)buffer);

    CFStringRef string = CFStringCreateWithBytes(kCFAllocatorDefault,
        buffer,
        bufferLength,
        kCFStringEncodingUTF8,
        TRUE);
    CFShow(string);
    return 0;
}
A simpler solution:
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>

int main()
{
    CFMutableDictionaryRef matching = IOServiceMatching("IOPlatformExpertDevice");
    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault, matching);
    CFStringRef serialNumber = IORegistryEntryCreateCFProperty(service,
        CFSTR("IOPlatformSerialNumber"), kCFAllocatorDefault, 0);
    // Note: CFStringGetCStringPtr may return NULL; a robust version would
    // fall back to CFStringGetCString().
    const char* str = CFStringGetCStringPtr(serialNumber, kCFStringEncodingMacRoman);
    printf("%s\n", str); //->stdout
    //CFShow(serialNumber); //->stderr
    IOObjectRelease(service);
    return 0;
}
compile with:
clang -framework IOKit -framework ApplicationServices cpuid.c -o cpuid
Fork from github if you like ;)
https://github.com/0infinity/IOPlatformSerialNumber
You may be misinterpreting the value of the serial-number parameter. If I use ioreg -f -k serial-number, I get this:
| "serial-number" =
| 00000000: 55 51 32 00 00 00 00 00 00 00 00 00 00 XX XX XX XX UQ2..........XXXX
| 00000011: XX XX XX XX 55 51 32 00 00 00 00 00 00 00 00 00 00 XXXXUQ2..........
| 00000022: 00 00 00 00 00 00 00 00 00 .........
(I've X'd out my Mac's serial number except for the repeated part.)
You don't see the null characters when you show the string because, well, they're null characters. I don't know why it has what seems like multiple fields separated by null characters, but that's what it seems to be.
I recommend doing further investigation to make sure there isn't a specification for how this data is supposed to be interpreted; if you don't find anything, I'd skip through the first run of nulls and get everything after that up to the next run of nulls.
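A minimal sketch of that heuristic against the CFDataRef bytes from the question; the function name is illustrative, and this makes no claim about the official layout of the property:
#include <stddef.h>

/* Skip the first field and the run of NULs after it, then copy bytes up
 * to the next NUL. Purely the heuristic suggested above. */
static size_t extract_after_first_nul_run(const UInt8 *buf, size_t len,
                                          char *out, size_t outLen)
{
    size_t i = 0, n = 0;
    while (i < len && buf[i] != 0) i++;          /* leading non-NUL field */
    while (i < len && buf[i] == 0) i++;          /* first run of NULs */
    while (i < len && buf[i] != 0 && n + 1 < outLen)
        out[n++] = (char)buf[i++];               /* up to the next NUL */
    out[n] = '\0';
    return n;
}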