How to test if an unsigned __int64 number exceeds its range

I have a function where calculated values can exceed the range of unsigned __int64, whose maximum Microsoft documents as 18,446,744,073,709,551,615. How can I test whether a number has exceeded that range? I converted the value to a char string and tried testing by checking the length with strlen, but some values longer than the limit I specified, for example with if (strlen(charvar) > 17), mysteriously escape the test. How can I test this effectively?

If you can use a modern compiler or Boost, then lexical_cast will do the job:
#include <boost/lexical_cast.hpp>   // boost::lexical_cast, boost::bad_lexical_cast

uint64_t bigint;   // boost::uint64_t from <boost/cstdint.hpp> on older compilers
try {
    bigint = boost::lexical_cast<uint64_t>(str);
} catch (const boost::bad_lexical_cast &e) {
    // do whatever you want to do when the string isn't a valid uint64_t
}
// Safely use bigint
See the Boost site for the library. You can definitely get this for VS 2008.
If this is Windows-only you can also look at _atoi64 and the like; see MSDN. These return I64_MAX and I64_MIN in case of overflow/underflow.
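A portable alternative, if the value is available as a string as in the question: the standard strtoull function (C99; on older MSVC the closest equivalent is _strtoui64) reports overflow by setting errno to ERANGE and returning ULLONG_MAX. A sketch:
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Sketch: parse a decimal string into a 64-bit unsigned value and detect overflow.
   Returns 1 on success, 0 if the text is not a number or exceeds
   18,446,744,073,709,551,615. */
static int parse_u64(const char *str, unsigned long long *out)
{
    char *end;
    unsigned long long v;

    errno = 0;
    v = strtoull(str, &end, 10);
    if (end == str)                            /* no digits at all */
        return 0;
    if (errno == ERANGE && v == ULLONG_MAX)    /* value out of range */
        return 0;
    *out = v;
    return 1;
}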

Related

Do-While Loop in C doesn't repeat

This code doesn't repeat if I answer a negative number like "-1.01". How can I make it loop so that it will ask again for c?
#include <stdio.h>

main()
{
    float c;

    do {
        printf("O hai! How much change is owed? ");
        scanf("%.2f", &c);
    } while (c < 0.0);

    return(0);
}
The format strings for scanf are subtly different from those for printf. After the %, you are only allowed to have (as per C11 7.21.6.2 The fscanf function /3):
an optional assignment-suppressing character *.
an optional decimal integer greater than zero that specifies the maximum field width (in characters).
an optional length modifier that specifies the size of the receiving object.
a conversion specifier character that specifies the type of conversion to be applied.
Hence your format specifier becomes illegal the instant it finds the . character, which is not one of the valid options. As per /13 of that C11 section listed above:
If a conversion specification is invalid, the behaviour is undefined.
For input, you're better off using the most basic format strings so that the format is not too restrictive. A good rule of thumb in I/O is:
Be liberal in what you accept, specific in what you generate.
So, the code is better written as follows, including what a lot of people ignore, the possibility that the scanf itself may fail, resulting in an infinite loop:
#include <stdio.h>

int main (void) {
    float c;

    do {
        printf ("O hai! How much change is owed? ");
        if (scanf ("%f", &c) != 1) {
            puts ("Error getting a float.");
            break;
        }
    } while (c < 0.0f);

    return 0;
}
If you're after a more general-purpose input solution, where you want to let the user enter anything, guard against buffer overflow, handle prompting and so on, every C developer eventually comes to the conclusion that the standard ways of getting input all have deficiencies, so they generally write their own to get more control.
The usual shape of such a routine is to read a whole line (for example with fgets) and only then parse it.
Once you have the user's input as a string, you can examine and play with it as much as you like, including doing anything you would have done with scanf, by using sscanf instead (and being able to go back and do it again and again if initial passes over the data are unsuccessful).
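A minimal sketch of that read-a-line-then-parse approach (the buffer size of 128 is an arbitrary choice here):
#include <stdio.h>

int main(void) {
    char line[128];              /* arbitrary line buffer for this sketch */
    float c = -1.0f;

    do {
        printf("O hai! How much change is owed? ");
        if (fgets(line, sizeof line, stdin) == NULL)
            return 1;                        /* EOF or read error */
        if (sscanf(line, "%f", &c) != 1)
            c = -1.0f;                       /* not a number: ask again */
    } while (c < 0.0f);

    printf("Change owed: %.2f\n", c);
    return 0;
}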
scanf("%.2f", &c );
// ^^ <- This seems unnecessary here.
Please stick with the basics.
scanf("%f", &c);
If you want to limit your input to 2 digits,
scanf("%2f", &c);

xmlNewCDataBlock implicit conversion to int

I'm parsing XML via the libxml2 library. After updating Xcode to 5.1, I get a warning that the last parameter, the length, is implicitly converted to int, while my value is an unsigned long.
Here's the function declaration:
XMLPUBFUN xmlNodePtr XMLCALL
xmlNewCDataBlock(xmlDocPtr doc,
                 const xmlChar *content,
                 int len);
Is there any similar function that takes unsigned long values? I don't know how big my data can be, and I want to process it safely.
There's no such function. libxml2's string manipulation functions use ints for string lengths and offsets, so text nodes longer than INT_MAX are not supported.
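Nothing stops you from checking the length yourself before the call, though. A minimal sketch (the helper name newCDataChecked is made up here):
#include <limits.h>
#include <libxml/tree.h>

/* Hypothetical wrapper: create the CDATA node only if the length fits in an int,
   since that is what libxml2's API takes. Returns NULL when the data is too big. */
static xmlNodePtr newCDataChecked(xmlDocPtr doc, const xmlChar *content,
                                  unsigned long len)
{
    if (len > (unsigned long)INT_MAX)
        return NULL;                  /* longer than libxml2 can represent */
    return xmlNewCDataBlock(doc, content, (int)len);
}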

Unexpected looping in Objective C

I have this code:
unsigned int k = (len - sizeof(MSG_INFO));
NSLog(@"%d", k);
for (unsigned int ix = 0; ix < k; ix++)
{
    m_pOutPacket->m_buffer[ix] = (char)(pbuf[ix + sizeof(MSG_INFO)]);
}
The problem is that when len = 0 and sizeof(MSG_INFO) = 68, k becomes -68, yet this condition still gets into the for loop, which keeps running seemingly forever.
Your code says: unsigned int k. So k isn't -68, it's unsigned. This makes k a very big number: with a 4-byte int it would be 4294967228 (that is, 2^32 - 68). This is obviously quite a lot more than 0, so it's going to take your for loop a while to get that high, although it would terminate eventually.
The reason you think it's -68 is that when you print it out with a function like NSLog, NSLog has no direct knowledge of the arguments passed in; it decides how to treat them based on the format string supplied as the first argument.
You're calling this:
NSLog(@"%d", k);
This tells NSLog to treat the argument as a signed int (%d). You should be doing this:
NSLog(@"%u", k);
So that NSLog treats the argument as the type that it is: unsigned (%u). See the NSLog documentation.
As it stands, I'd expect your buffer to overrun, trashing memory as the loop runs and your application to crash.
After reflecting, I believe @FreeAsInBeer is correct that you don't want to iterate through the for loop in this situation, and you could probably fix this by using signed ints. However, it seems to me you would be better off checking len > sizeof(MSG_INFO) and, if that isn't the case, handling it differently. In most situations I can think of, I wouldn't want to perform any processing after the for loop if I'd failed to read sufficient information for a message...
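A minimal sketch of that check, using the names from the question (>= rather than > so that an empty payload is still accepted and simply copies nothing):
/* Sketch: only compute k and copy the payload when len is large enough.
   len, pbuf, MSG_INFO and m_pOutPacket are the variables from the question. */
if (len >= sizeof(MSG_INFO)) {
    unsigned int k = len - sizeof(MSG_INFO);   /* can no longer wrap around */
    for (unsigned int ix = 0; ix < k; ix++) {
        m_pOutPacket->m_buffer[ix] = (char)(pbuf[ix + sizeof(MSG_INFO)]);
    }
} else {
    /* message too short to contain a header: report the error instead of copying */
}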
I'm not really sure what is going on here, as the loop should never execute. I've loaded up your code, and it seems that the unsigned part of your int declaration is causing the issues. If you remove both of your unsigned specifiers, your code will execute as it should, without ever entering the loop.

How do I implement a bit array in C / Objective C

iOS / Objective-C: I have a large array of boolean values.
This is an inefficient way to store these values – at least eight bits are used for each element when only one is needed.
How can I optimise?
See CFMutableBitVector/CFBitVector for a CFType option.
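A small sketch of what using it looks like (CoreFoundation is a plain C API; the count of 1000 bits is an arbitrary choice here):
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

int main(void) {
    /* Create a growable bit vector and give it 1000 bits, all initially 0. */
    CFMutableBitVectorRef bits = CFBitVectorCreateMutable(kCFAllocatorDefault, 0);
    CFBitVectorSetCount(bits, 1000);

    CFBitVectorSetBitAtIndex(bits, 42, 1);           /* set bit 42 */
    CFBit b = CFBitVectorGetBitAtIndex(bits, 42);    /* read it back */
    printf("bit 42 = %d\n", (int)b);

    CFRelease(bits);
    return 0;
}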
Try this:
#define BITOP(a,b,op) \
((a)[(size_t)(b)/(8*sizeof *(a))] op ((size_t)1<<((size_t)(b)%(8*sizeof *(a)))))
Then for any array of unsigned integer elements no larger than size_t, the BITOP macro can access the array as a bit array. For example:
unsigned char array[16] = {0};
BITOP(array, 40, |=); /* sets bit 40 */
BITOP(array, 41, ^=); /* toggles bit 41 */
if (BITOP(array, 42, &)) return 0; /* tests bit 42 */
BITOP(array, 43, &=~); /* clears bit 43 */
etc.
You use the bitwise logical operations and bit-shifting. (A Google search for these terms might give you some examples.)
Basically you declare an integer type (including int, char, etc.), then you "shift" integer values to the bit you want, then you do an OR or an AND with the integer.
Some quick illustrative examples (in C++):
inline bool bit_is_on(int bit_array, int bit_number)
{
    return ((bit_array) & (1 << bit_number)) ? true : false;
}

inline void set_bit(int &bit_array, int bit_number)
{
    bit_array |= (1 << bit_number);
}

inline void clear_bit(int &bit_array, int bit_number)
{
    bit_array &= ~(1 << bit_number);
}
Note that this provides "bit arrays" of constant size (sizeof(int) * 8 bits). Maybe that's OK for you, or maybe you will want to build something on top of this. (Or re-use whatever some library provides.)
This will use less memory than bool arrays... HOWEVER... The code the compiler generates to access these bits will be larger and slower. So unless you have a large number of objects that need to contain these bit arrays, it might have a net-negative impact on both speed and memory usage.
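If a single int's worth of bits isn't enough, the same shift-and-mask idea scales up to a heap-allocated byte array. A sketch (the bitarray_* helper names are made up here):
#include <limits.h>
#include <stdlib.h>

/* Hypothetical helpers: a bit array of nbits bits backed by heap-allocated bytes.
   Free the returned pointer with free() when done. */
static unsigned char *bitarray_create(size_t nbits) {
    return calloc((nbits + CHAR_BIT - 1) / CHAR_BIT, 1);   /* all bits start at 0 */
}
static void bitarray_set(unsigned char *a, size_t i) {
    a[i / CHAR_BIT] |= (1u << (i % CHAR_BIT));
}
static void bitarray_clear(unsigned char *a, size_t i) {
    a[i / CHAR_BIT] &= ~(1u << (i % CHAR_BIT));
}
static int bitarray_test(const unsigned char *a, size_t i) {
    return (a[i / CHAR_BIT] >> (i % CHAR_BIT)) & 1;
}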
#define BITOP(a,b,op) \
((a)[(size_t)(b)/(8*sizeof *(a))] op (size_t)1<<((size_t)(b)%(8*sizeof *(a))))
will not work for the clear operation, BITOP(array, 43, &=~): without the parentheses around the shifted bit, the ~ is applied to (size_t)1 before the shift, so the expression clears bits 0 through 43 instead of just bit 43.
Fix:
#define BITOP(a,b,op) \
((a)[(size_t)(b)/(8*sizeof *(a))] op ((size_t)1<<((size_t)(b)%(8*sizeof *(a)))))
I came across this question as I am writing a bit array framework intended to manage large numbers of bits, similar to Java's BitSet. I was looking to see if the name I decided on was in conflict with other Objective-C frameworks.
Anyway, I'm just starting this and am deciding whether to post it on SourceForge or another open source hosting site.
Let me know if you are interested.
Edit: I've created the project, called BitArray, on SourceForge. The source is in the SF SVN repository and I've also uploaded a compiled framework; the SourceForge project page will get you there.
Frank

C89: signed/unsigned mismatch

Are signed/unsigned mismatches necessarily bad?
Here is my program:
int main(int argc, char *argv[]) {
    unsigned int i;
    for (i = 1; i < argc; i++) { // signed/unsigned mismatch here
    }
}
argc is signed, i is not. Is this a problem?
"signed/unsigned mismatches" can be bad. In your question, you are asking about comparisons. When comparing two values of the same base type, but one signed and one unsigned, the signed value is converted to unsigned. So,
int i = -1;
unsigned int j = 10;

if (i < j)
    printf("1\n");
else
    printf("2\n");
prints 2, not 1. This is because in i < j, i is converted to an unsigned int. (unsigned int)-1 is equal to UINT_MAX, a very large number. The condition thus evaluates to false, and you get to the else clause.
For your particular example, argc is guaranteed to be non-negative, so you don't have to worry about the "mismatch".
It is not a real problem in your particular case, but the compiler can't know that argc will always have values that will not cause any problems.
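If you want the warning gone without changing behaviour, either use a signed counter to match argc, or convert argc explicitly at the comparison (argc is never negative, so the conversion is safe). A C89-friendly sketch of both options:
#include <stdio.h>

int main(int argc, char *argv[]) {
    int i;             /* option 1: a signed counter, matching argc's type */
    unsigned int u;    /* option 2: keep unsigned, convert argc explicitly */

    for (i = 1; i < argc; i++)
        printf("%s\n", argv[i]);

    for (u = 1; u < (unsigned int)argc; u++)
        printf("%s\n", argv[u]);

    return 0;
}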
It's not bad, but I'd fix compiler warnings concerning signed/unsigned mismatch, because bad things can happen even when they seem unlikely or impossible. When you do have to fix a bug caused by a signed/unsigned mismatch, the compiler is basically saying "I told you so". Don't ignore the warning; it's there for a reason.
It is only indirectly a problem.
Bad things can happen if you use signed integers for bitwise operations such as &, |, << and >>.
Completely different bad things can happen if you use unsigned integers for arithmetic (underflow, infinite loops when testing if a number is >= 0 etc.)
Because of this, some compilers and static checking tools will issue warnings when you mix signed and unsigned integers in either type of operation (arithmetic or bit manipulation.)
Although it can be safe to mix them in simple cases like your example, if you do that it means you cannot use those static checking tools (or must disable those warnings) which may mean other bugs go undetected.
Sometimes you have no choice, e.g. when doing arithmetic on values of type size_t in memory management code.
In your example I would stick to int, just because it is simpler to have fewer types, and the int is going to be in there anyway as it is the type of the first argument to main().
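As an illustration of the unsigned-arithmetic pitfall mentioned above: counting down with an unsigned index and testing >= 0 never terminates, because the index wraps around to UINT_MAX instead of going negative. A sketch of the bug and a common idiom that avoids it:
#include <stdio.h>

int main(void) {
    unsigned int i;
    unsigned int n = 5;

    /* BUG (do not do this): i >= 0 is always true for an unsigned type,
       so when i reaches 0 the decrement wraps to UINT_MAX and the loop never ends.
    for (i = n - 1; i >= 0; i--)
        printf("%u\n", i);
    */

    /* A common fix: test and decrement in one step; the body sees n-1 down to 0. */
    for (i = n; i-- > 0; )
        printf("%u\n", i);

    return 0;
}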