In my iPhone app development, I'm using a timestamp value as an order id.
I want to format the timestamp so that it contains only the fractional digits.
For example, given:
timestamp value = 343434234.78900633
I want to format that timestamp so that it returns the decimal part, 78900633.
You need the modf function, which breaks a double/float into an integral part and a fractional part.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

double intpart;
double param = 343434234.78900633;
double fractpart = modf(param, &intpart); // fractpart = 0.78900633...
printf("%f = %f + %f\n", param, intpart, fractpart);
// Print the fractional part with enough precision before parsing the digits:
// the default "%f" keeps only 6 digits, which would drop the last two here.
char buf[32];
sprintf(buf, "%.8f", fractpart); // 8 fractional digits, precision permitting
int fpart = atoi(buf + 2);       // skip the leading "0."
printf("Fractional as int = %d\n", fpart);
I'm working on a project implementing a side-channel timing attack in C on HMAC. I've done so by computing the hex-encoded tag and brute-forcing it byte by byte, taking advantage of strcmp's early-exit timing behavior. For every digit of my test tag, I measure how long each hex char takes to verify; I take the hex char with the highest measured time, infer that it is the correct char at that position, and move on to the next byte. However, strcmp's timing is very unpredictable. Although it is easy to see the timing difference between comparing two equal strings and two totally different strings, I'm having difficulty finding the char that takes the most time to compare when every other string I test differs by only one byte.
The changeByte function below takes customTag, the tag computed so far, and attempts to find the correct byte for index. changeByte is called n times, where n is the length of the tag. hexTag is a global variable holding the correct tag. timeCompleted stores the average time taken to compare the test tag for each of the 16 hex characters at a given position. Any help would be appreciated; thank you for your time.
// Checks whether the byte at the given index is correct or not
void changeByte(unsigned char *k, unsigned char *m, unsigned char *algorithm, unsigned char *customTag, int index)
{
    long iterations = 50000;
    // used for every byte sequence to test the timing
    unsigned char *tempTag = (unsigned char *)malloc(sizeof(unsigned char) * (strlen((char *)customTag) + 1));
    sprintf((char *)tempTag, "%s", customTag);
    int timeIndex = 0;
    // stores the time taken for every respective hex char
    double *timeCompleted = (double *)malloc(sizeof(double) * 16);
    // iterates through the hex chars 0-9, a-f (ASCII 48-57 and 97-102)
    for (int i = 48; i <= 102; i++) {
        if (i >= 58 && i <= 96) continue;
        double total = 0;
        for (long j = 0; j < iterations; j++) {
            // measures the time it takes to compare with this char in that position
            tempTag[index] = (unsigned char)i;
            struct rusage usage;
            struct timeval start, end;
            getrusage(RUSAGE_SELF, &usage);
            start = usage.ru_stime;
            for (int k = 0; k < 50000; k++)
                externalStrcmp(tempTag, hexTag); // this just calls strcmp in another file
            getrusage(RUSAGE_SELF, &usage);
            end = usage.ru_stime;
            double startTime = ((double)start.tv_sec + (double)start.tv_usec) / 10000;
            double endTime = ((double)end.tv_sec + (double)end.tv_usec) / 10000;
            total += endTime - startTime;
        }
        double val = total / iterations;
        timeCompleted[timeIndex] = val;
        timeIndex++;
    }
    // sets the char at this position to the hex char with the longest comparison time
    customTag[index] = getCorrectChar(timeCompleted);
    free(timeCompleted);
    free(tempTag);
}
// finds the highest time. The hex char corresponding to the highest time taken
// by the verify function is the correct one
unsigned char getCorrectChar(double *timeCompleted)
{
    double high = -1;
    int index = 0;
    for (int i = 0; i < 16; i++) {
        if (timeCompleted[i] > high) {
            high = timeCompleted[i];
            index = i;
        }
    }
    // indexes 0-9 map to '0'-'9' (ASCII 48-57); 10-15 map to 'a'-'f' (ASCII 97-102)
    return (index + 48) <= 57 ? (unsigned char)(index + 48) : (unsigned char)(index + 87);
}
I'm not sure if it's the main problem, but you add seconds to microseconds directly, as though 1 us == 1 s. This gives wrong results whenever the seconds fields of start and end differ.
Also, the scaling factor between usec and sec is 1,000,000 (thx zaph). So this should work better:
double startTime=(double)start.tv_sec + (double)start.tv_usec/1000000;
double endTime=(double)end.tv_sec + (double)end.tv_usec/1000000;
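If you want to avoid floating-point rounding altogether, here is a minimal sketch using the same struct timeval fields, doing the subtraction in integer microseconds:

long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
                + (end.tv_usec - start.tv_usec);
double elapsed = elapsed_us / 1000000.0; // back to seconds if a double is needed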
I need to do this in two separate steps, but so far I am not finding a way of doing it.
First, I need to convert a double variable into a string (a char array) and save it in a variable. I have noticed type casting doesn't work the same in C as in Java and other languages. How do I convert a variable to a string / char array?
Second, I need to concatenate the strings; there will be a total of 6 string variables to concatenate, and I have only found the strcat function, which takes just 2 arguments.
These are the strings I am trying to build:
char *queryOne = "INSERT INTO location (id, carid, ownerid, lat, long, speed) VALUES (,2, 1, ";
char *queryTwo = lat; // lat is a double
char *queryThree = ",";
char *queryFour = longatude; // longatude is a double
char *queryFive = ",";
char *querySix = speed; // speed is a double
And then I need the concatenated result to work as one long string in mysql_query(conn, query).
Edit: So possibly this should convert the data type, I think?
char buffer [50];
char *queryOne = "INSERT INTO location (id, carid, ownerid, lat, long, speed) VALUES (,2, 1, ";
char *queryTwo = sprintf (buffer, "%d", lat);
char *queryThree = ",";
char *queryFour = sprintf (buffer, "%d", longatude);
char *queryFive = ",";
char *querySix = sprintf (buffer, "%d", speed);
fprintf(stderr, "Dta: %s\n", queryOne);
fprintf(stderr, "Dta: %s\n", *queryTwo);
fprintf(stderr, "Dta: %s\n", queryThree);
fprintf(stderr, "Dta: %s\n", *queryFour);
fprintf(stderr, "Dta: %s\n", queryFive);
fprintf(stderr, "Dta: %s\n", *querySix);
In your case, you could use:
#define MAXSQL 256
char sql[MAXSQL];
snprintf(sql, MAXSQL, "%s %f , %f , %f", queryOne, lat, longatude, speed);
The snprintf function writes into the buffer that is its first argument: http://www.cplusplus.com/reference/cstdio/snprintf/?kw=snprintf
Now you can use the sql string as you please.
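For instance, a minimal sketch of handing it to MySQL (assuming conn is an open MYSQL * handle; mysql_query and mysql_error are from the MySQL C API):

if (mysql_query(conn, sql) != 0)
    fprintf(stderr, "Query failed: %s\n", mysql_error(conn));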
Note that I used snprintf rather than sprintf. This is to avoid potential buffer overflows.
Also, don't call strcat repeatedly like that, because it creates a Shlemiel the Painter algorithm: every call to strcat gets slower, since strcat has to scan from the beginning of the string to find the null terminator. See http://www.joelonsoftware.com/articles/fog0000000319.html for more info.
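If you do need incremental appends, one common workaround is to track the end of the string yourself so each append starts where the last one stopped; a minimal sketch (ignoring truncation handling for brevity):

size_t len = 0;
len += snprintf(sql + len, MAXSQL - len, "%s", queryOne);
len += snprintf(sql + len, MAXSQL - len, "%f,", lat); // and so on for each piece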
I currently have code in Objective-C that pulls out an integer's most significant digit. My only question is whether there is a better way to do it than what I have provided below. It gets the job done, but it feels like a cheap hack.
The code takes the number passed in and keeps growing the divisor by a factor of 10 until dividing by it leaves a single digit. The reason I am doing this is for an educational app that splits a number up by its place values and shows the values added together to produce the final output (1234 = 1000 + 200 + 30 + 4).
int test = 1;
int result = 0;
int value = 0;
do {
    value = input / test;
    result = test;
    // append a "0" to the divisor to grow it by a factor of 10
    test = [[NSString stringWithFormat:@"%d0", test] intValue];
} while (value >= 10);
Any advice is always greatly appreciated.
Will this do the trick?
#include <math.h>

int sigDigit(int input)
{
    int digits = (int)log10(input); // number of digits minus 1 (input must be > 0)
    return input / pow(10, digits); // strip everything below the leading digit
}
Basically it does the following:
Finds the number of digits in input minus one ((int)log10(input)) and stores it in digits.
Divides input by 10 ^ digits.
What remains is the most significant digit of input.
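A quick hypothetical check of the idea:

printf("%d\n", sigDigit(1234)); // prints 1
printf("%d\n", sigDigit(987));  // prints 9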
EDIT: in case you need a function that gets the digit at a specific index, check this one out:
int digitAtIndex(int input, int index)
{
    int trimmedLower = input / (int)pow(10, index); // drop the digits below index
    int trimmedUpper = trimmedLower % 10;           // drop the digits above index
    return trimmedUpper;
}
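As a hypothetical example of the educational use case from the question, this loop decomposes 1234 into its place values, with index 0 being the ones digit:

for (int i = 3; i >= 0; i--)
    printf("%d\n", digitAtIndex(1234, i) * (int)pow(10, i)); // 1000, 200, 30, 4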
How can I convert a float to int while rounding up to the next integer? For example, 1.00001 would go to 2 and 1.9999 would go to 2.
float myFloat = 3.333f; // these conversions need #include <math.h>

// nearest integer, rounded up (3.333 -> 4):
int result = (int)ceilf(myFloat);

// nearest integer (3.4999 -> 3, 3.5 -> 4):
int result = (int)roundf(myFloat);

// nearest integer, rounded down (3.999 -> 3):
int result = (int)floorf(myFloat);

// plain truncation toward zero (when you don't care about rounding):
int result = (int)myFloat;
Use the ceil function:
int intValue = (int)ceil(yourValue);
You can use the following C functions to get integer values from the different floating-point types:

extern float ceilf(float);
extern double ceil(double);
extern long double ceill(long double);

These functions take and return float, double and long double respectively; their job is to compute the ceiling (or, for the floor family, the floor) of the argument, as described in http://en.wikipedia.org/wiki/Floor_and_ceiling_functions
Then you can cast the return value to the desired type:

int intVariable = (int)ceilf(floatValueToConvert);

Hope it is helpful.
If we have a float value like 13.123 and want to print it as an integer like 13:
Code
float floatnumber = 13.123; // you can also use CGFloat instead of float
NSLog(@"%.f", floatnumber); // prints 13; note that %.f rounds to the nearest integer
How do I convert an int to a char, and also back from a char to an int?
e.g. 12345 == abcde
Right now I'm doing it with a whole bunch of case statements; I wonder if there is a smarter way of doing it?
Thanks,
Tee
I would recommend using ASCII values and just typecasting.
In most cases it is best to just use the ASCII values to encode letters; however, if you want 1 2 3 4 to represent 'a' 'b' 'c' 'd', you can use the following.
For example, to convert the number 1 to 'a' you can do:
char letter = (char)(1 + 96);
since in ASCII 97 corresponds to the character 'a'. Likewise, you can convert the character 'a' to the integer 1 as follows:
int num = (int)'a' - 96;
Of course it is just easier to use the ASCII values to start with and avoid the adding and subtracting shown above. :-D
If you just want to map 'a' -> 1, 'b' -> 2, ..., 'i' -> 9, you can simply do the following:

int convert(char *s)
{
    if (!s) return -1; // error
    int result = 0;
    while (*s)
    {
        int digit = *s - 'a' + 1;
        if (digit < 1 || digit > 9)
            return -1; // error
        result = 10 * result + digit;
        s++; // advance to the next character
    }
    return result;
}

However, you still need to decide which letter (if any) maps to 0, and to watch for overflow (this code doesn't check for it).
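For the other direction (an int back to letters), here is a minimal hypothetical sketch under the same 'a' -> 1, ..., 'i' -> 9 mapping; the helper name and buffer size are made up for illustration:

void toLetters(int n, char *buf) // buf must hold at least 11 chars
{
    char tmp[11];
    int i = 0;
    while (n > 0)
    {
        int digit = n % 10; // as noted above, a 0 digit has no letter here
        tmp[i++] = (char)('a' + digit - 1);
        n /= 10;
    }
    for (int j = 0; j < i; j++) // the digits came out in reverse order
        buf[j] = tmp[i - 1 - j];
    buf[i] = '\0';
}

Calling toLetters(12345, buf) writes "abcde" into buf.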
unsigned char encryPt[1];
encryPt[0] = (unsigned char)1; // stores the byte value 1, not the character '1'