I have this code:
unsigned int k = (len - sizeof(MSG_INFO));
NSLog(@"%d", k);
for (unsigned int ix = 0; ix < k; ix++)
{
    m_pOutPacket->m_buffer[ix] = (char)(pbuf[ix + sizeof(MSG_INFO)]);
}
The problem is, when:
len = 0 and sizeof(MSG_INFO) = 68,
k = -68;
this condition still gets into the for loop, which then continues for what looks like an infinite number of iterations.
Your code says: unsigned int k. So k isn't -68, it's unsigned. This makes k a very big number: with a 4-byte int, it would be 4294967228. This is obviously quite a lot more than 0, so it's going to take your for loop a while to get that high, although it would terminate eventually.
The reason you think it's -68 is that when you print it out with a function like NSLog, the function has no direct knowledge of the arguments passed in; it determines how to treat them based on the format string, supplied as the first argument.
You're calling:
NSLog(@"%d", k);
This tells NSLog to treat the argument as a signed int (%d). You should be doing this:
NSLog(#"%u",k);
So that NSLog treats the argument as the type that it is: unsigned (%u). See the NSLog documentation.
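If it helps to see the difference concretely, here is a minimal plain-C sketch of the same effect (printf handles format specifiers the same way NSLog does):

#include <stdio.h>

int main(void) {
    unsigned int k = 0u - 68u;  // wraps around: 4294967228 with 32-bit ints
    printf("%d\n", (int)k);     // reinterpreted as signed: prints -68 on typical two's-complement machines
    printf("%u\n", k);          // printed as what it is: prints 4294967228
    return 0;
}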
As it stands, I'd expect your buffer to overrun, trashing memory as the loop runs and your application to crash.
After reflecting, I believe @FreeAsInBeer is correct: you don't want to iterate through the for loop in this situation, and you could probably fix this by using signed ints. However, it seems to me you would be better off checking len > sizeof(MSG_INFO) and, if that isn't the case, handling it differently. In most situations I can think of, I wouldn't want to perform any processing after the for loop if I'd failed to read sufficient information for a message...
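For illustration, such a guard might look like the sketch below (MSG_INFO, pbuf, len, and m_pOutPacket are the question's own names; the early return is just one assumed way to handle a short read):

// Hypothetical guard: only copy when a complete header actually arrived.
if (len < sizeof(MSG_INFO)) {
    return;   // short read: handle the error instead of looping over garbage
}
unsigned int k = len - sizeof(MSG_INFO);   // now guaranteed non-negative
for (unsigned int ix = 0; ix < k; ix++) {
    m_pOutPacket->m_buffer[ix] = (char)(pbuf[ix + sizeof(MSG_INFO)]);
}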
I'm not really sure what is going on here, as the loop should never execute. I've loaded up your code, and it seems that the unsigned part of your int declaration is causing the issue. If you remove both of your unsigned specifiers, the code executes as it should, without ever entering the loop.
Related
What will happen when an integer crosses its limit? The output is 3595; how does that come about? And int is a 2-byte type here?
#include<stdio.h>
#include<conio.h>
void main()
{
    int n=12,res=1;
    clrscr();
    while(n>3)
    {
        n+=3;
        res*=3;
    }
    printf("%d",n*res);
    getch();
}
The program will have undefined behavior.
The condition you gave is non-terminating: the loop's controlling expression never becomes false in a well-defined manner.
You will go on multiplying until res overflows, and n keeps growing until it overflows too. The loop only stops if n wraps around to a negative value (or anything <= 3), and by then res has also overflowed. Because signed overflow is undefined, you can't be sure how this program behaves or what the result will be.
The behaviour is undefined; you should not rely on anything specific. Common manifestations of int overflow are:
Wraparound such that 1 + INT_MAX becomes INT_MIN. This is what every Windows PC I have encountered does. The bit pattern produced by the operation matches the unsigned cousin exactly.
Clamping such that 1 + INT_MAX becomes INT_MAX. I last observed this on a machine (with sign-magnitude int) running a variant of UNIX in the 1990s.
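Since the overflow itself is undefined, the portable approach is to test before the operation rather than after. A minimal sketch in plain C (the bound of 3 mirrors the question's res *= 3):

#include <limits.h>
#include <stdio.h>

int main(void) {
    int res = 1;
    // Stop before res * 3 would exceed INT_MAX, so the overflow never happens.
    while (res <= INT_MAX / 3) {
        res *= 3;
    }
    printf("largest power of 3 that fits in an int: %d\n", res);
    return 0;
}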
I am having difficulty understanding why the following code is giving me the numbers below. Can anyone explain this conversion from float to int? (pCLocation is a CGPoint)
counter = 0;
pathCells[counter][0].x = pCLocation.x;
pathCells[counter][0].y = pCLocation.y;
cellCount[counter]++;
NSLog(#"%#",[NSString stringWithFormat:#"pCLocation at:
%f,%f",pCLocation.x,pCLocation.y]);
NSLog(#"%#",[NSString stringWithFormat:#"path cell 0: %i,%i",
pathCells[counter][cellCount[counter-1]].x,pathCells[counter][cellCount[counter]].y]);
2012-03-09 01:17:37.165 50LevelsBeta1[1704:207] pCLocation at: 47.000000,16.000000
2012-03-09 01:17:37.172 50LevelsBeta1[1704:207] path cell 0: 0,1078427648
Assuming your code is otherwise correct:
I think it would help you to understand how NSLog and other printf-style functions work. When you call NSLog(@"%c %f", a_char, a_float), your code pushes the format string and values onto the stack, then jumps to the start of that function's code. Since NSLog accepts a variable number of arguments, it doesn't know how much to pop off the stack yet. It knows there is at least a format string, so it pops that off and begins to scan it. When it finds the format specifier %c, it knows to pop one value off the stack (a char, promoted to int in a variadic call) and print it. Then it finds %f, so it knows to pop a double (floats are promoted to double in variadic calls) and print that as a floating-point value. Then it reaches the end of the format string, so it's done.
Now here's the kicker: if you lie to NSLog and tell it you are providing an int but actually provide a float, it has no way to know you are lying. It simply assumes you are telling the truth and prints whatever bits it finds in memory, however you asked it to be printed.
That's why you are seeing weird values: you are printing a floating point value as though it were an int. If you really want an int value, you should either:
Apply a cast: NSLog(@"cell.x: %i", (int)cell.x);
Leave it a float but use the format string to hide the decimals: NSLog(@"cell.x: %.0f", cell.x);
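To see why 1078427648 specifically shows up: in the variadic call the float 47.0 is promoted to a double, and %i then reads that double's raw bytes 32 bits at a time. A small sketch of the same reinterpretation (assumes a little-endian machine and that the coordinates are float/CGFloat, as the question implies):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    double promoted = 47.0;   // what pCLocation.x becomes in the variadic call
    uint32_t words[2];
    memcpy(words, &promoted, sizeof promoted);  // view the 8 bytes as two 32-bit ints
    // On a little-endian machine this prints "0 1078427648" --
    // exactly the "garbage" pair from the question's log output.
    printf("%u %u\n", words[0], words[1]);
    return 0;
}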
(Alternate theory, still potentially useful.)
You might be printing out the contents of uninitialized memory.
In the code you've given, counter = 0 and is never changed. So you assign values to:
pathCells[0][0].x = pCLocation.x;
pathCells[0][0].y = pCLocation.y;
cellCount[0]++;
Then you print:
pathCells[0][cellCount[-1]].x
pathCells[0][cellCount[0]].y
I'm pretty sure that cellCount[-1] isn't what you want. C allows this because even though you think of it as working with an array of a specific size, foo[bar] really just means grab the value at memory address foo plus offset bar. So an index of -1 just means take one step back. That's why you don't get a warning or error, just junk data.
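For illustration, here is a tiny, well-defined example of that same indexing rule (a negative index is legal as long as the resulting address stays inside the array):

#include <stdio.h>

int main(void) {
    int a[3] = {10, 20, 30};
    int *p = &a[1];            // p points at the middle element
    // p[-1] means *(p + (-1)): one step back from p, i.e. a[0].
    printf("%d %d %d\n", p[-1], p[0], p[1]);   // prints 10 20 30
    return 0;
}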
You should clarify what pathCells, cellCount, and counter are and how they relate to each other. I think you have a bug in how you are combining these things.
Does it make any difference if I use e.g. a short or char variable instead of int as a for-loop counter?
for (int i = 0; i < 10; ++i) {}
for (short i = 0; i < 10; ++i) {}
for (char i = 0; i < 10; ++i) {}
Or maybe there is no difference? Maybe I make things even worse and efficiency decreases? Does using a different type save memory and increase speed? I am not sure, but I suppose the ++ operator may need to widen the type, and as a result slow down the execution.
It will not make any difference you should be caring about, provided the range you iterate over fits into the type you choose. Performance-wise, you'll probably get the best results when the size of the iteration variable is the same as the platform's native integer size, but any decent compiler will optimize it to use that anyway. On a managed platform (e.g. C# or Java), you don't know the target platform at compile time, and the JIT compiler is basically free to optimize for whatever platform it is running on.
The only thing you might want to watch out for is when you use the loop counter for other things inside the loop; changing the type may change the way these things get executed, up to the point (in C++ at least) that a different overload of a function or method may get called because the loop variable has a different type. An example would be when you output the loop variable through a C++ stream, like so: cout << i << endl;. Similarly, the type of the loop variable can affect the implicit types of (sub-)expressions that contain it, and lead to hidden overflows in numeric calculations, e.g.: int j = i * i;.
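As a concrete example of how the choice can bite, a char counter silently caps the range you can iterate over. A sketch assuming an 8-bit signed char:

#include <stdio.h>

int main(void) {
    // With an 8-bit signed char, i can never reach 200: incrementing
    // past 127 typically wraps it to -128 (implementation-defined),
    // so i < 200 stays true and the loop would never terminate.
    // for (char i = 0; i < 200; ++i) { /* never ends */ }

    // The int version is fine:
    for (int i = 0; i < 200; ++i) { /* ... */ }
    return 0;
}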
Hello I have a bizarre problem with sprintf. Here's my code:
void draw_number(int number, int height, int xpos, int ypos) {
    char string_buffer[5]; // 5000 is the maximum score, hence 4 characters plus null character equals 5
    printf("Number - %i\n", number);
    sprintf(string_buffer, "%i", number); // Get string
    printf("String - %s\n", string_buffer);
    int y_down = ypos + height;
    for (int x = 0; x < 5; x++) {
        char character = string_buffer[x];
        if (character == NULL) { // Blank characters occur at the end of the number from sprintf. Testing with NULL works
            break;
        }
        int x_left = xpos + height * x;
        int x_right = x_left + height;
        GLfloat vertices[] = {x_left, ypos, x_right, ypos, x_left, y_down, x_right, y_down};
        rectangle2d(vertices, number_textures[atoi(strcat(&character, "\0"))], full_texture_texcoords);
    }
}
With the printf calls there, the numbers are printed successfully and the numbers are drawn as expected. When I take them away I can't view the output and compare it, of course, but the numbers aren't rendering correctly. I assume sprintf breaks somehow.
This also happens with NSLog. Adding NSLog's anywhere in the program can either break or fix the function.
What on earth is going on?
This is using Objective-C with the iOS 4 SDK.
Thank you for any answer.
Well, this bit of code is definitely odd:
char character = string_buffer[x];
...
... strcat(&character,"\0") ...
Originally I was thinking that, depending on when there happens to be a NUL terminator on the stack, this will clobber some piece of memory, which could be causing your problems. However, since you're appending the empty string, I don't think it will have any effect.
Perhaps the contents of the stack actually contain numbers that atoi is interpreting? Either way, I suggest you fix that and see if it solves your issue.
As to how to fix it Georg Fritzsche beat me to it.
With strcat(&character,"\0") you are trying to use a single character as a character array. This will probably result in atoi() returning completely different values from what you're expecting (as you have no null-termination) or simply crash.
To fix the original approach, you could use a proper zero-terminated string:
char number[] = { string_buffer[x], '\0' };
// ...
... number_textures[atoi(number)] ...
But even easier would be to simply use the following:
... number_textures[character - '0'] ...
Don't use NULL to compare against a character; use '\0', since it's a character you're looking for. Also, your code comment sounds surprised. Of course a '\0' will occur at the end of the string; that is how C terminates strings.
If your number is ever larger than 9999, you will have a buffer overflow, which can cause unpredictable effects.
When you have that kind of problem, instantly think stack or heap corruption. You should dynamically allocate your buffer with enough size; having it at a fixed size is BEGGING for this kind of trouble, because you don't check that the number is within the max. If you ever had another bug that pushed it above the max, you'd get this problem here.
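Pulling the answers' fixes together, a corrected sketch of the loop might look like this (rectangle2d, number_textures, and full_texture_texcoords are the question's own symbols, assumed declared elsewhere):

void draw_number(int number, int height, int xpos, int ypos) {
    char string_buffer[12];   // big enough for any 32-bit int, sign included
    snprintf(string_buffer, sizeof string_buffer, "%i", number);

    int y_down = ypos + height;
    // Walk the string until its terminating '\0' (assumes number >= 0,
    // so every character is a digit).
    for (int x = 0; string_buffer[x] != '\0'; x++) {
        int x_left = xpos + height * x;
        int x_right = x_left + height;
        GLfloat vertices[] = { x_left, ypos, x_right, ypos,
                               x_left, y_down, x_right, y_down };
        // Map the digit character directly to its numeric value.
        rectangle2d(vertices, number_textures[string_buffer[x] - '0'],
                    full_texture_texcoords);
    }
}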
Are signed/unsigned mismatches necessarily bad?
Here is my program:
int main(int argc, char *argv[]) {
unsigned int i;
for (i = 1; i < argc; i++) { // signed/unsigned mismatch here
}
}
argc is signed, i is not. Is this a problem?
"signed/unsigned mismatches" can be bad. In your question, you are asking about comparisons. When comparing two values of the same base type, but one signed and one unsigned, the signed value is converted to unsigned. So,
int i = -1;
unsigned int j = 10;
if (i < j)
    printf("1\n");
else
    printf("2\n");
prints 2, not 1. This is because in i < j, i is converted to an unsigned int. (unsigned int)-1 is equal to UINT_MAX, a very large number. The condition thus evaluates to false, and you get to the else clause.
For your particular example, argc is guaranteed to be non-negative, so you don't have to worry about the "mismatch".
It is not a real problem in your particular case, but the compiler can't know that argc will always have values that will not cause any problems.
It's not bad. I'd fix compiler warnings concerning signed/unsigned mismatches because bad things can happen, even if they seem unlikely or impossible. When you do have to fix a bug because of a signed/unsigned mismatch, the compiler is basically saying "I told you so". Don't ignore the warning; it's there for a reason.
It is only indirectly a problem.
Bad things can happen if you use signed integers for bitwise operations such as &, |, << and >>.
Completely different bad things can happen if you use unsigned integers for arithmetic (underflow, infinite loops when testing if a number is >= 0 etc.)
Because of this, some compilers and static checking tools will issue warnings when you mix signed and unsigned integers in either type of operation (arithmetic or bit manipulation.)
Although it can be safe to mix them in simple cases like your example, if you do that it means you cannot use those static checking tools (or must disable those warnings) which may mean other bugs go undetected.
Sometimes you have no choice, e.g. when doing arithmetic on values of type size_t in memory management code.
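The classic size_t trap is a backwards loop: since a size_t can never go below zero, i >= 0 is always true. A sketch of the pitfall and one common fix:

#include <stddef.h>

void process(const int *a, size_t n) {
    // Broken: i wraps to SIZE_MAX instead of going negative,
    // so i >= 0 is always true and this would loop forever.
    // for (size_t i = n - 1; i >= 0; --i) { ... }

    // Fixed: count down from n and index with i - 1.
    for (size_t i = n; i > 0; --i) {
        int value = a[i - 1];
        (void)value;   // process a[i - 1] here
    }
}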
In your example I would stick to int, just because it is simpler to have fewer types, and the int is going to be in there anyway as it is the type of the first argument to main().