I have an array of chars pulled out of an NSData object with getBytes:range:.
I want to test whether a particular bit is set. I would assume I would do it with a bitwise AND, but it doesn't seem to be working for me.
I have the following:
unsigned char firstBytes[3];
[data getBytes:&firstBytes range:range];

int bitIsSet = firstBytes[0] & 00100000;
if (bitIsSet) {
    // Do Something
}
The value of firstBytes[0] is 48 (or '0' as an ASCII character), but bitIsSet always seems to be 0. I imagine I am just doing something silly here; I am new to working at the bit level, so maybe my logic is wrong.
If you put a 0 before a number, you are saying it's expressed in octal representation.
00100000 actually means 32768 in decimal representation, or 10000000 00000000 in binary.
Try
int bitIsSet = firstBytes[0] & 32;
or
int bitIsSet = firstBytes[0] & 0x20;
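For reference, a minimal sketch of the same test written with a shift, so the intended bit is explicit (the value 48 follows the question's firstBytes[0]):

#include <stdio.h>

int main(void)
{
    unsigned char byte = 48;          /* 0b00110000, the value from the question */
    int bitIsSet = byte & (1 << 5);   /* 1 << 5 == 32 == 0x20 */

    if (bitIsSet)
        printf("bit 5 is set\n");     /* prints, because 48 has bit 5 set */
    return 0;
}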
Why do we get this value as output: ffffffff?
#include <stdio.h>

struct bitfield {
    signed char bitflag:1;
};

int main()
{
    unsigned char i = 1;
    struct bitfield *var = (struct bitfield *)&i;
    printf("\n %x \n", var->bitflag);
    return 0;
}
I know that in a signed data type the most significant bit represents the sign (0 for positive, 1 for negative). But I still can't figure out why -1 (ffffffff) is printed. With only one bit set in the struct, I expected the value 1 when it gets promoted to a 1-byte char, because my machine is little-endian and I expected that one bit in the field to be interpreted as the LSB of my 1-byte character.
Can someone please explain? I'm really confused.
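As a minimal sketch of the same effect without the pointer cast (assuming a typical implementation, where a 1-bit signed bit-field can hold only 0 and -1):

#include <stdio.h>

struct bitfield {
    signed char bitflag:1;   /* one bit: it is the sign bit, so the field holds 0 or -1 */
};

int main(void)
{
    struct bitfield b;
    b.bitflag = 1;                 /* the stored bit pattern 1 reads back as -1 */
    printf("%d\n", b.bitflag);     /* prints -1 after promotion to int */
    printf("%x\n", b.bitflag);     /* the same int shown in hex: ffffffff */
    return 0;
}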
I'm working on a project implementing a side-channel timing attack in C on HMAC. I've done so by computing the hex-encoded tag and brute-forcing it byte by byte, taking advantage of strcmp's timing optimization. So for every digit in my test tag, I calculate the amount of time it takes for every hex char to verify. I take the hex char corresponding to the highest measured time, infer that it is the correct char in the tag, and move on to the next byte. However, strcmp's timing is very unpredictable. Although it is easy to see the timing difference between comparing two equal strings and two totally different strings, I'm having difficulty finding the char that takes the most time to compare when every other string I'm comparing against is very similar (differing by only one byte).
The changeByte method below takes in customTag, which is the tag that has been computed up to that point, and attempts to find the correct byte corresponding to index. changeByte is called n times, where n = length of the tag. hexTag is a global variable holding the correct tag. timeCompleted stores the average time taken to compare the test tag for each hex character at a given position. Any help would be appreciated; thank you for your time.
// Checks whether the byte at the given index is correct or not
void changeByte(unsigned char *k, unsigned char *m, unsigned char *algorithm, unsigned char *customTag, int index)
{
    long iterations = 50000;
    // used for every byte sequence to test the timing
    unsigned char *tempTag = (unsigned char *)malloc(sizeof(unsigned char) * (strlen((char *)customTag) + 1));
    sprintf((char *)tempTag, "%s", customTag);
    int timeIndex = 0;
    // stores the time taken for every respective hex char
    double *timeCompleted = (double *)malloc(sizeof(double) * 16);
    // iterates through the hex chars 0-9, a-f (ASCII 48-57 and 97-102)
    for (int i = 48; i <= 102; i++) {
        if (i >= 58 && i <= 96) continue;
        double total = 0;
        for (long j = 0; j < iterations; j++) {
            // measures how long the comparison takes with this char in that position
            tempTag[index] = (unsigned char)i;
            struct rusage usage;
            struct timeval start, end;
            getrusage(RUSAGE_SELF, &usage);
            start = usage.ru_stime;
            for (int k = 0; k < 50000; k++) externalStrcmp(tempTag, hexTag); // this just calls strcmp in another file
            getrusage(RUSAGE_SELF, &usage);
            end = usage.ru_stime;
            double startTime = ((double)start.tv_sec + (double)start.tv_usec) / 10000;
            double endTime = ((double)end.tv_sec + (double)end.tv_usec) / 10000;
            total += endTime - startTime;
        }
        double val = total / iterations;
        timeCompleted[timeIndex] = val;
        timeIndex++;
    }
    // sets the char at this index to the hex char corresponding to the highest time
    customTag[index] = getCorrectChar(timeCompleted);
    free(timeCompleted);
    free(tempTag);
}
// finds the highest time. The hex char corresponding with the highest time it took the
// verify function to complete is the correct one
unsigned char getCorrectChar(double *timeCompleted)
{
    double high = -1;
    int index = 0;
    for (int i = 0; i < 16; i++) {
        if (timeCompleted[i] > high) {
            high = timeCompleted[i];
            index = i;
        }
    }
    return (index + 48) <= 57 ? (unsigned char)(index + 48) : (unsigned char)(index + 87);
}
I'm not sure if it's the main problem, but you add seconds to microseconds directly, as though 1 us == 1 s. That will give wrong results whenever the number of seconds in startTime and endTime differs.
The scaling factor between usec and sec is 1,000,000 (thx zaph), so this should work better:
double startTime=(double)start.tv_sec + (double)start.tv_usec/1000000;
double endTime=(double)end.tv_sec + (double)end.tv_usec/1000000;
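As a small sketch, the same conversion can also be factored into a helper (the name timeval_to_seconds is just an illustrative choice, not part of the code above):

#include <sys/time.h>

/* Converts a struct timeval to seconds as a double; tv_usec counts
   microseconds, so it is scaled by 1,000,000 before being added. */
static double timeval_to_seconds(struct timeval tv)
{
    return (double)tv.tv_sec + (double)tv.tv_usec / 1000000.0;
}

The elapsed time is then simply timeval_to_seconds(end) - timeval_to_seconds(start).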
I am having an issue getting the correct format of a char array in Objective-C.
Correct sample:
unsigned char bytes[] = {2, 49, 53, 49, 3, 54};
When printing to the debug area I get this:
Printing description of bytes:
(unsigned char [6]) bytes = "\x02151\x0365"
Incorrect sample:
I then attempt to populate an unsigned char array with characters manually (via a for loop that produces the assignments below):
unsigned char bb[64];
bb[0] = 2;
bb[1] = 49;
bb[2] = 53;
bb[3] = 49;
bb[4] = 3;
bb[5] = 54;
When printing to the debug area I get this:
Printing description of bb: (unsigned char [64]) bb = "\x02151\x036";
Also, when expanding the arrays while debugging, I can see that Xcode shows int values for the bytes array but characters such as '\x02' for the bb array.
This is just a high-level piece of code that does not do much yet, but I need to match the array named bytes before being able to proceed.
Any ideas? Thanks
You don't:
state what kind (local, instance, etc.) of variables bytes and bb are and that makes a difference;
show your for loop; or
state what you mean by "printing"
so this answer is going to be a guess!
Try the following code (it's a "complete" Cocoa app):
@implementation AppDelegate

unsigned char bytes[] = {2, 49, 53, 49, 3, 54};
char randomBytes[] = { 0x35, 0x0 };
unsigned char bb[64];

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    for (int ix = 0; ix < 6; ix++) bb[ix] = bytes[ix];
    // breakpoint here
}

@end
Now, in the debugger, this is what I get on my machine/compiler combination (the result is not guaranteed, as explained below):
(lldb) p bytes
(unsigned char [6]) $0 = "\x02151\x0365"
(lldb) p bb
(unsigned char [64]) $1 = "\x02151\x036"
I think this reproduces your result. So what is going on?
The variable bytes is an array, but as it contains characters the debugger chooses to interpret it as a C string when displaying it - note the double quotes around the value and the \x hex escapes for non-printable characters.
A C string terminates on a null (zero) byte, so when the debugger interprets your array as a C string it displays characters until it finds a null byte and stops. It just so happens that on your machine the two bytes following your bytes array have the values 0x35 and 0x0 (I have reproduced that here by adding the randomBytes array); those are the character 5 and the null byte, so the debugger prints the 5.
So why does bb only print 6 characters? Global variables are zero-initialised, so bb contains 64 null bytes before the for loop. After the loop, the 7th of those null bytes acts as the EOS (end of string) marker and the print shows just the 6 characters you expect.
Finally, why do I say the above results are not guaranteed? The memory layout order of global variables is not specified by the C Standard, which underlies Objective-C, so there is in fact no guarantee that the randomBytes array immediately follows the bytes array. If the global variable layout algorithm is different on your computer/compiler combination, you may not get the same results. However, the underlying cause of your issue is the same - the "printing" is running off the end of your bytes array.
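If the goal is simply to see exactly what the array contains, one option (a sketch only; the helper name dump_bytes is an illustrative choice, not part of the code above) is to print the bytes yourself, so trailing memory never enters the picture:

#include <stdio.h>

/* Prints each byte in hex, so the output does not depend on a terminating
   null byte or on whatever memory happens to follow the array. */
static void dump_bytes(const unsigned char *buf, int count)
{
    for (int ix = 0; ix < count; ix++)
        printf("%02x ", buf[ix]);   /* e.g. 02 31 35 31 03 36 */
    printf("\n");
}

Calling dump_bytes(bb, 6) after the copy loop prints the same six values as dump_bytes(bytes, 6).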
HTH
Basic bit manipulation question. How can I declare a uint8_t bitmap value in Objective-C?
e.g. value: "00000001"
Is it as simple as:
uint8_t value = 00000001
or does it need to have some hexadecimal prefix?
uint8_t valuePrefix = 0x00000001
When you say "bitmap", I assume you're talking about binary representations. If you're specifying a binary number, you use the 0b prefix:
uint8_t value = 0b00000100; // 4
Or, if there's only one bit on, we often use the bitwise shift operator:
uint8_t value = 1 << 2; // 4
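As a small sketch combining the two styles above (variable names are just illustrative), a mask built with a shift can then be tested with a bitwise AND:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t flags = 0b00000101;   /* bits 0 and 2 set (decimal 5); 0b literals are a Clang/GCC extension, standard in C23 */
    uint8_t mask  = 1 << 2;       /* same value as 0b00000100 */

    if (flags & mask)
        printf("bit 2 is set\n");
    return 0;
}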
I want to transfer data from unsigned char hash[512 + 1] to char res[512 + 1] safely.
My C hashing library, MHASH, returns a result that can be printed as shown below.
for (int i = 0; i < size /* hash block size */; i++)
    printf("%.2x", hash[i]); // hash is unsigned char - prints normal hash characters in the range [a-z, 0-9]
printf("\n");
I want to do something like this (see below).
const char* res = (const char *)hash; // "hash" to "res"
printf("%s\n", res); // print "res" (which is const char*) - if i do this, unknown characters are printed
I know the difference between char and unsigned char, but I don't know how to transfer the data. Any answer would be greatly appreciated; thanks in advance. But please do not recommend C++ (std) code - I am working on a project that is not linked against the C++ standard library.
Given that the contents of the unsigned char array are printable characters, you can always safely convert it to char, either as a hard copy with memcpy or via a pointer cast as in the code you have already written.
I'm guessing that the actual problem here is that the unsigned char array contents are not actually printable characters but raw integer values. You'll have to convert them from integers to ASCII characters. How to do this depends on the format of the data, which isn't clear from your question.
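For example, assuming the bytes are raw binary hash output and the goal is a printable lowercase hex string, a minimal sketch could look like this (the helper name bytes_to_hex is an illustrative choice, not part of MHASH or the question):

#include <stdio.h>

/* Encodes count raw bytes as lowercase hex; out must have room for
   2 * count characters plus the terminating null byte. */
static void bytes_to_hex(const unsigned char *in, int count, char *out)
{
    for (int i = 0; i < count; i++)
        sprintf(out + 2 * i, "%02x", in[i]);   /* same formatting as the %.2x loop above */
    out[2 * count] = '\0';                     /* ensure termination even when count == 0 */
}

This is essentially what the %.2x printf loop in the question does, just written into a buffer instead of stdout.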
Assuming the following:
#define ARR_SIZE (512 + 1)
unsigned char hash[ARR_SIZE];
char res[ARR_SIZE];
/* filling up hash here. */
Just do:
#include <string.h>
...
memcpy(res, hash, ARR_SIZE);
Well, thank you guys for your answers, but unfortunately nothing has worked yet. For now I am sticking with the code below.
char res[(sizeof(hash) * 2) + 1] = { '\0' };
char *pPtr = res;

for (int i = 0; i < hashBlockSize; i++)
    sprintf(pPtr + (i * 2), "%.2x", hash[i]);

return (const char *)pPtr;
I'll stick with this until there is a more performant way to get it done. It's true, my question is strongly related to the MHASH library.