How do I get the bit length from NSUInteger or NSString - Objective-C

I need to get the bit length from an NSUInteger or an NSString.
How can I get the bit length?
Thanks

If I'm understanding the question correctly (it is kind of odd, but... hey... so am I):
sizeof(NSUInteger) * 8
[aString maximumLengthOfBytesUsingEncoding: ...] * 8
For NSNumber, a subclass of NSValue, things get a little bit trickier. You'll need to call -objCType, then determine the bit length from that.
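A minimal sketch of both suggestions (the string value is just an example):

// Bit width of NSUInteger: its size in bytes times 8 (32 or 64 depending on platform).
NSUInteger intBits = sizeof(NSUInteger) * 8;

// Upper bound on the bit length of a string's byte representation in a given encoding.
NSString *aString = @"some exponent value";
NSUInteger stringBits = [aString maximumLengthOfBytesUsingEncoding:NSUTF8StringEncoding] * 8;

NSLog(@"NSUInteger: %lu bits, string: at most %lu bits",
      (unsigned long)intBits, (unsigned long)stringBits);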

OP: I really think you need to organize your thoughts and ask a single, coherent question that, at a minimum, gives an overview of what you're trying to accomplish. So far you have asked at least four questions that are all minor variations of each other.
To other people answering this question: From the context of his other questions, he's trying to do some bignum crypto (à la RSA), or some other bignum number theory work (he needs to do powermod()). Again, based on the context of his other questions, what he's asking in this question is how to compute floor(log2(X)) + 1 where X is an arbitrary data type (hence the NSString).
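If that reading is right, here is a sketch for values that fit in a machine word (an arbitrary-precision value would need a bignum library instead):

// Bit length of an NSUInteger, i.e. floor(log2(x)) + 1 for x > 0,
// computed with a shift loop to avoid floating-point edge cases.
static NSUInteger bitLength(NSUInteger x)
{
    NSUInteger bits = 0;
    while (x != 0) {
        x >>= 1;
        bits++;
    }
    return bits;   // returns 0 for x == 0
}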

I have an RSA exponent key value which is supposed to be a big integer, but I have it in an NSString/NSData with the full value in it (UTF-8 encoded).
As part of RSA encryption, I need to do the following in the iPhone environment:
1. Find the bit length of the above exponent value.
2. Do arithmetic operations on the exponent and modulus values, including PowMod.
3. Which data type (uint64_t, NSNumber or NSUInteger) can I use for the arithmetic operations as well as for holding the big-integer result value?
4. Do I need a specific bigint implementation, or can I manage with the existing iPhone data types for bigints?
5. Do those external bigint implementations require porting the OpenSSL or GMP libraries to the iPhone?

Related

Is there any reason to use NSInteger instead of uint8_t with NS_ENUM?

The general standard appears to use NS_ENUM with NSInteger as the base type. Why is this the case? Assuming less than 256 cases (which covers almost any enumeration), is there any reason to use that instead of uint8_t, which could use less memory space? Either imports into Swift fine.
This is different from NS_OPTIONS, where a larger type makes sense, since you shouldn't be doing any bit math with enumerations, and you can use every number representable by the base type as a value.
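For contrast, a small NS_OPTIONS sketch (the type and constant names are made up) where bit math is exactly what you want:

typedef NS_OPTIONS(NSUInteger, FilePermissions) {
    FilePermissionNone    = 0,
    FilePermissionExecute = 1 << 0,
    FilePermissionWrite   = 1 << 1,
    FilePermissionRead    = 1 << 2,
};

FilePermissions perms = FilePermissionRead | FilePermissionWrite;
BOOL canWrite = (perms & FilePermissionWrite) != 0;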
The answer to the question in the title:
Is there any reason to use NSInteger instead of uint8_t with NS_ENUM?
is probably not.
When declaring an enum in C, if no underlying type is specified, the compiler is free to choose any suitable type from char and the signed and unsigned integer types that can represent all the required values. The current Xcode/Clang compiler picks a 4-byte integer. One could reasonably assume the compiler writers made an informed choice - some balance of performance and storage.
Smaller types, such as uint8_t, will usually be aligned on smaller boundaries in memory (or on disk) - but that is only of benefit if the adjacent field matches the alignment, e.g. if a 2-byte field follows a 1-byte field then, unless otherwise specified (e.g. with #pragma pack), there will probably be an intervening unused padding byte.
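A quick illustration of that padding point (the struct and its field names are just for demonstration):

#include <stdio.h>
#include <stdint.h>

struct Sample {
    uint8_t  flag;   // 1 byte
    uint16_t count;  // 2 bytes, aligned to a 2-byte boundary
};

int main(void)
{
    // Typically prints 4, not 3: a padding byte sits between the two fields.
    printf("sizeof(struct Sample) = %zu\n", sizeof(struct Sample));
    return 0;
}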
Whether any performance or storage differences are significant will be heavily dependent on the application. Follow the usual rule of thumb - don't optimise until an issue is found.
However if you find semantic benefit in limiting the size then certainly do so - there is no general reason you shouldn't. The choice is similar to picking signed vs. unsigned integers, some programmers avoid unsigned types for values that will be ≥ 0 unless absolutely required for the extra range, while others appreciate the semantic benefit.
Summary: there is no right answer; it's largely a subjective issue.
HTH
First of all: the memory footprint is close to completely meaningless. You are talking about 1 byte vs. 4/8 bytes (assuming memory alignment doesn't force the use of 4/8 bytes regardless of what you chose). How many NS_ENUM (C) values do you expect to have in your running app?
I guess the reason is pretty simple: NSInteger is the "catch-all" integer type in Cocoa. That makes assignments easier; in particular, you do not have to care about assigning a bigger integer type to a smaller one, which without casting would lead to warnings.
Having more than one integer type in a desktop app with a 32/64-bit model is something of an anachronism. Neither a Mac nor a MacBook nor an iPhone is an embedded microcontroller …
You can use any integer data type, including uint8_t, with NS_ENUM, like so:
typedef NS_ENUM(uint8_t, eEnumAddEditViewMode) {
    eWBEnumAddMode,
    eWBEnumEditMode
};
In the old C-style approach NSInteger is the default, because NSInteger is the "catch-all" integer type in Objective-C, and developers can easily box and unbox values with their own variables. This is just developer-friendly best practice.
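For illustration, the enum above can then be used like any other integer-backed enum (the switch is just an example):

eEnumAddEditViewMode mode = eWBEnumEditMode;   // stored in a single byte

switch (mode) {
    case eWBEnumAddMode:
        NSLog(@"Add mode");
        break;
    case eWBEnumEditMode:
        NSLog(@"Edit mode");
        break;
}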

Integer Precision and Conversion Errors

I have been programming in Objective-C for only a few weeks. My experience in programming languages such as BASIC, Visual Basic, C++ and PHP is much more extensive, starting back in 1987 and continuing through to today. Although, for the last 5 years, I have coded exclusively in PHP.
Today, I find myself confused by what I perceive to be bit conversion errors within the Objective-C language. I first noticed this the other day when trying to divide an integer (84) converted to a float by a float (10.0). This produced 8.399999, instead of the 8.400 I was hoping for. I coded a way around the issue and moved on.
Today, I am extracting an (int) 0 from an NSMutableDictionary. I store it first in an NSInteger and second in an int variable. The values should be 0 for both cases, but for both cases, I get the integer value 151229568. (See screenshot)
I remember from my early programming years that we had to worry about the size of the container, because pointing to a block of memory with a 32-bit pointer to access a 4-bit value resulted in capturing all the data associated with other values, and thus resulted in what appeared to be the wrong number being captured. With implicit memory management and type conversions becoming the norm, I have not had to worry about this kind of issue for years, and now that I am confronted with it again, I need advice and clarification from programmers who are more familiar with this topic in today's programming environments.
Questions:
Is this a case of incorrect pointer sizing or something else?
What is happening on the back-end to produce this conversion from 0 to another number?
What can I do to get better precision and accuracy from my Objective-C calculations and variable assignments?
Code:
NSInteger hsibs = [keyData objectForKey:@"half_sibs"];
int hsibsi = [keyData objectForKey:@"half_sibs"];
//breakpoint and screen capture of variables in stack
I don't know Objective-C all that well, but it looks like the method you use to obtain your data returns a value of type id (see this reference), and not int.
Looks like you either need to cast it or get the integer value in such a manner:
NSInteger hsibs = [[keyData objectForKey:@"half_sibs"] integerValue];
int hsibsi = [[keyData objectForKey:@"half_sibs"] intValue];
and then see if you get the expected results.
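To see the difference concretely, here is a small sketch (the dictionary contents are assumed): the raw objectForKey: return value is an NSNumber pointer, and storing that pointer in an integer records the object's address rather than 0.

NSMutableDictionary *keyData = [NSMutableDictionary dictionary];
[keyData setObject:@0 forKey:@"half_sibs"];   // an NSNumber wrapping the int 0

// Wrong: this stores the NSNumber's pointer value, not 0.
NSInteger wrong = (NSInteger)[keyData objectForKey:@"half_sibs"];

// Right: unbox the NSNumber first.
NSInteger right = [[keyData objectForKey:@"half_sibs"] integerValue];

NSLog(@"wrong = %ld, right = %ld", (long)wrong, (long)right);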

Shamir's Secret Sharing using Bignum or Bigint or ....?

I've got a generic cryptographic implementation using OpenSSL's BIGNUM library in C. Standard decryption is working fine, but I would also like to implement Shamir's Secret Sharing (SSS).
The problem I've run across is that BIGNUM only supports whole numbers, and as part of the Lagrange interpolation for SSS I'll need to multiply by negative values.
Is there any way to do this? Otherwise, I can do my SSS in another language (Python?) as long as it is able to interact with the BIGNUMs produced by OpenSSL.
Any suggestions? TIA!
If you look at the BIGNUM structure in OpenSSL, you'll find a flag named neg. If the BIGNUM object represents a negative number, neg will be set to 1. Also, BN_mul() handles multiplication by a negative number correctly. So you can implement SSS with OpenSSL, no problem!
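A minimal sketch of that, assuming OpenSSL's BN API is available (the values are illustrative; link against libcrypto):

#include <stdio.h>
#include <openssl/bn.h>

int main(void)
{
    BN_CTX *ctx = BN_CTX_new();
    BIGNUM *a = BN_new();
    BIGNUM *b = BN_new();
    BIGNUM *r = BN_new();

    BN_set_word(a, 7);
    BN_set_word(b, 3);
    BN_set_negative(b, 1);      // b = -3; this just sets the neg flag

    BN_mul(r, a, b, ctx);       // r = 7 * (-3) = -21

    char *dec = BN_bn2dec(r);
    printf("result = %s\n", dec);   // prints -21

    OPENSSL_free(dec);
    BN_free(a); BN_free(b); BN_free(r);
    BN_CTX_free(ctx);
    return 0;
}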
Modular arithmetic (using groups) only provides positive results, so I presume you want to use non-modular arithmetic? In that case you could simply keep a separate variable indicating if the value is negative or not. The outcome of positive multiplication is the same except for the sign bit anyway.
It's not as clean a design as possible, but for a few methods it would probably not matter that much. You could create separate methods that mimic the BN methods except for an integer holding the value of the sign (-1, 0 or 1).

How are the digits in ObjC method type encoding calculated?

This is a follow-up to my previous question:
What are the digits in an ObjC method type encoding string?
Say there is an encoding:
v24#0:4:8#12B16#20
How are those numbers calculated? B is a char so it should occupy just 1 byte (not 4 bytes). Does it have something to do with "alignment"? What is the size of void?
Is it correct to calculate the numbers as follows? Ask sizeof on every item and round up the result to multiple of 4? And the first number becomes the sum of all the other ones?
The numbers were used in the m68K days to denote stack layout. That is, you could literally decode the method signature and, for just about all types, know exactly which bytes at what offset within the stack frame you could diddle to get/set arguments.
This worked because the m68K's ABI was entirely [IIRC -- been a long long time] stack based argument/return passing. There wasn't anything shoved into registers across call boundaries.
However, as Objective-C was ported to other platforms, always-on-the-stack was no longer the calling convention. Arguments and return values are often passed in registers.
Thus, those offsets are now useless. As well, the type encoding used by the compiler is no longer complete (because it never was terribly useful) and there will be types that won't be encoded. Not to mention that encoding some C++ templatized types yields method type encoding strings that can be many kilobytes in size (I think the record I ran into was around 30K of type information).
So, no, it isn't correct to use sizeof() to generate the numbers because they are effectively meaningless to everything. The only reason why they still exist is for binary compatibility; there are bits of esoteric code here and there that still parse the type encoding string with the expectation that there will be random numbers sprinkled here and there.
Note that there are vestiges of API in the ObjC runtime that still lead one to believe that it might be possible to encode/decode stack frames on the fly. It really isn't as the C ABI doesn't guarantee that argument registers will be preserved across call boundaries in the face of optimization. You'd have to drop to assembly and things get ugly really really fast (>shudder<).
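If you just want to look at these strings for yourself, the runtime will hand them to you; a small sketch (the class and selector are arbitrary):

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

int main(void)
{
    @autoreleasepool {
        Method m = class_getInstanceMethod([NSString class], @selector(compare:));
        // Prints something like "q24@0:8@16" on a 64-bit runtime --
        // the embedded numbers are the historical offsets discussed above.
        NSLog(@"%s", method_getTypeEncoding(m));
    }
    return 0;
}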
The full encoding string is constructed (in clang) by the method ASTContext::getObjCEncodingForMethodDecl, which you can find in lib/AST/ASTContext.cpp.
The method that does the size rounding is ASTContext::getObjCEncodingTypeSize, in the same file. It forces each size to be at least the size of an int. On all of Apple's current platforms, an int is 4 bytes.
The stack frame size and argument offsets are calculated by the compiler. I'm actually trying to track this down in the Clang source myself this week; it possibly has something to do with CodeGenTypes::arrangeObjCMessageSendSignature. (Looks like Rob just made my life a lot easier!)
The first number is the sum of the others, yes -- it's the total space occupied by the arguments. To get the size of the type represented by an ObjC type encoding in your code, you should use NSGetSizeAndAlignment().
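For example, a small sketch using NSGetSizeAndAlignment() with an @encode() string:

#import <Foundation/Foundation.h>

int main(void)
{
    @autoreleasepool {
        NSUInteger size = 0, align = 0;
        // @encode(BOOL) is "B" (or "c" on some platforms);
        // NSGetSizeAndAlignment reports its real size: 1 byte, not 4.
        NSGetSizeAndAlignment(@encode(BOOL), &size, &align);
        NSLog(@"size = %lu, alignment = %lu",
              (unsigned long)size, (unsigned long)align);
    }
    return 0;
}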

Is there a practical limit to the size of bit masks?

There's a common way to store multiple values in one variable, by using a bitmask. For example, if a user has read, write and execute privileges on an item, that can be converted to a single number by saying read = 4 (2^2), write = 2 (2^1), execute = 1 (2^0) and then adding them together to get 7.
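A minimal sketch of that read/write/execute encoding in C (the constant names are made up):

enum { PERM_EXECUTE = 1 << 0, PERM_WRITE = 1 << 1, PERM_READ = 1 << 2 };

unsigned int perms = PERM_READ | PERM_WRITE | PERM_EXECUTE;   // 4 + 2 + 1 = 7

if (perms & PERM_WRITE) {
    // write permission is present
}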
I use this technique in several web applications, where I'd usually store the variable into a field and give it a type of MEDIUMINT or whatever, depending on the number of different values.
What I'm interested in, is whether or not there is a practical limit to the number of values you can store like this? For example, if the number was over 64, you couldn't use (64 bit) integers any more. If this was the case, what would you use? How would it affect your program logic (ie: could you still use bitwise comparisons)?
I know that once you start getting really large sets of values, a different method would be the optimal solution, but I'm interested in the boundaries of this method.
Off the top of my head, I'd write a set_bit and get_bit function that could take an array of bytes and a bit offset in the array, and use some bit-twiddling to set/get the appropriate bit in the array. Something like this (in C, but hopefully you get the idea):
// sets the n-th bit in |bytes|. num_bytes is the number of bytes in the array
// result is 0 on success, non-zero on failure (offset out-of-bounds)
int set_bit(unsigned char* bytes, unsigned long num_bytes, unsigned long offset)
{
    // make sure offset is valid (offset is unsigned, so only the upper bound needs checking)
    if (offset >= (num_bytes << 3)) { return -1; }

    // set the right bit
    bytes[offset >> 3] |= (1 << (offset & 0x7));
    return 0; // success
}

// gets the n-th bit in |bytes|. num_bytes is the number of bytes in the array
// returns (-1) on error, 0 if bit is "off", positive number if "on"
int get_bit(unsigned char* bytes, unsigned long num_bytes, unsigned long offset)
{
    // make sure offset is valid (offset is unsigned, so only the upper bound needs checking)
    if (offset >= (num_bytes << 3)) { return -1; }

    // get the right bit
    return (bytes[offset >> 3] & (1 << (offset & 0x7))) != 0;
}
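A quick usage sketch of those helpers (the array size and offset are arbitrary):

unsigned char mask[16] = {0};          // room for 128 flags

set_bit(mask, sizeof(mask), 70);       // turn flag 70 on

if (get_bit(mask, sizeof(mask), 70)) {
    // flag 70 is set
}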
I've used bit masks in filesystem code where the bit mask is many times bigger than a machine word. Think of it like an "array of booleans" (journalling masks in flash memory, if you want to know).
Many compilers know how to do this for you. Add a bit of OO code to have types that operate sensibly and then your code starts looking like its intent, not some bit-banging.
My 2 cents.
With a 64-bit integer, you can store values up to 2^64-1, and 64 is only 2^6. So yes, there is a limit, but if you need more than 64 bits' worth of flags, I'd be very interested to know what they were all doing :)
How many states do you potentially need to think about? If you have 64 potential states, the number of combinations they can exist in is the full range of a 64-bit integer.
If you need to worry about 128 flags, then a pair of 64-bit vectors would suffice (2 × 64 bits).
Addition: in Programming Pearls, there is an extended discussion of using a bit array of length 10^7, implemented in integers (for holding used toll-free 800 numbers) - it's very fast, and very appropriate for the task described in that chapter.
Some languages (I believe Perl does, not sure) permit bitwise arithmetic on strings, giving you a much greater effective range ((string length × 8-bit chars) combinations).
However, I wouldn't use a single value for superimposition of more than one type of data. The basic r/w/x triplet of 3-bit ints would probably be the upper "practical" limit, not for space-efficiency reasons, but for practical development reasons.
(PHP uses this system to control its error messages, and I have already found that it's a bit over the top when you have to define values where PHP's constants are not available and you have to generate the integer by hand; and to be honest, if chmod didn't support the 'ugo+rwx' style syntax I'd never want to use it, because I can never remember the magic numbers.)
The instant you have to crack open a constants table to debug code, you know you've gone too far.
Old thread, but it's worth mentioning that there are cases requiring bloated bit masks, e.g. molecular fingerprints, which are often generated as 1024-bit arrays that we have packed into 32 bigint fields (SQL Server not supporting UInt32). Bitwise operations work fine - until your table starts to grow and you realize the sluggishness of separate function calls. The binary data type would work, were it not for T-SQL's ban on bitwise operators having two binary operands.
For example, .NET uses an array of integers as the internal storage for its BitArray class.
Practically, there's no other way around it.
That being said, in SQL you will need more than one column (or BLOBs) to store all the states.
You tagged this question SQL, so I think you need to consult with the documentation for your database to find the size of an integer. Then subtract one bit for the sign, just to be safe.
Edit: Your comment says you're using MySQL. The documentation for MySQL 5.0 Numeric Types states that the maximum size of a NUMERIC is 64 or 65 digits. That's about 212 bits for 64 digits (64 × log2 10 ≈ 212.6).
Remember that your language of choice has to be able to work with those digits, so you may be limited to a 64-bit integer anyway.