I need to do bit operations on representations of arbitrary-precision numbers in Objective-C. So far I have been using NSData objects to hold the numbers - is there a way to bit-shift their contents? If not, is there a different way to achieve this?
Using NSMutableData you can fetch each byte into a char, shift its bits, and write it back with -replaceBytesInRange:withBytes:.
I don't see any other solution except for writing your own data holder class that uses a char * buffer to hold the raw data.
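A minimal sketch of that approach, assuming a hypothetical four-byte NSMutableData and a single byte position of interest; a real multi-byte shift would also have to carry bits across byte boundaries, as the category in the next answer does:
NSMutableData *number = [NSMutableData dataWithLength:4];   // hypothetical number buffer
NSUInteger index = 0;                                       // hypothetical byte position

uint8_t byte;
[number getBytes:&byte range:NSMakeRange(index, 1)];        // fetch the byte
byte <<= 2;                                                 // shift its bits
[number replaceBytesInRange:NSMakeRange(index, 1)           // write it back
                  withBytes:&byte];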
As you'll have spotted, Apple doesn't provide arbitrary precision support. Nothing is provided larger than the 1024-bit integers in vecLib.
I also don't think NSData provides shifts and rolls. So you're going to have to roll your own. E.g. a very naive version, which may have some small errors as I'm typing it directly here:
@interface NSData (Shifts)
- (NSData *)dataByShiftingLeft:(NSUInteger)bitCount;
@end

@implementation NSData (Shifts)
- (NSData *)dataByShiftingLeft:(NSUInteger)bitCount
{
    // we'll work byte by byte
    NSUInteger wholeBytes = bitCount >> 3;
    NSUInteger extraBits = bitCount & 7;
    NSMutableData *newData = [NSMutableData dataWithLength:self.length + wholeBytes + (extraBits ? 1 : 0)];
    if (extraBits)
    {
        const uint8_t *sourceBytes = self.bytes;
        uint8_t *destinationBytes = newData.mutableBytes;
        NSUInteger index;
        for (index = 0; index + 1 < self.length; index++)
        {
            destinationBytes[index] =
                (sourceBytes[index] >> (8 - extraBits)) |
                (sourceBytes[index + 1] << extraBits);
        }
        // the last source byte has no neighbour; shift in zeroes
        destinationBytes[index] = sourceBytes[index] >> (8 - extraBits);
    }
    else
    {
        // just copy all of self into the beginning of newData
        [newData replaceBytesInRange:NSMakeRange(0, self.length) withBytes:self.bytes];
    }
    return newData;
}
@end
Of course, that assumes the number of bits you want to shift by is itself expressible as an NSUInteger, amongst other sins.
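For illustration, a hypothetical call to the category above (the exact bytes you get back depend on the byte-order convention you choose for your numbers):
uint8_t raw[] = { 0x01, 0x80 };
NSData *value = [NSData dataWithBytes:raw length:sizeof(raw)];
NSData *shifted = [value dataByShiftingLeft:3];
NSLog(@"%@ shifted left by 3 -> %@", value, shifted);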
I need to store a series of 1s and 0s of arbitrary length.
I had planned to use ints, but then it occurred to me that really all I need is a bitstream.
NSMutableData seems like just the thing. Except all I see anyone talking about is how to set bytes on it, or store jpegs or strings in it. I need to get way more granular than that.
Given a series of 1s and 0s such as: 110010101011110110, how do I make it into an NSData object--and how do I get it out?
NSData's appendBytes:length: and mutableBytes are all at the byte level, and I need to start lower. Storing those 1s and 0s as bytes doesn't make sense, when the bytes themselves are made of sets of 1s and 0s. I'm having trouble finding anything telling me how to set bits.
Here's some faux code:
NSString *sequence = @"01001010000010"; //(or int sequence, or whatever)
for (...) { //iterate through whatever it is--this isn't what I need help with
    if ([sequence intOrCharOrWhateverAtIndex:index] == 0) {
        //do something to set a bit -- this is what I need help with
    } else {
        //set the bit the other way -- again, this is what I need help with
    }
}
NSData *data = [NSData something]; //wrap it up and save it -- help here too
Do you literally have 1s and 0s? Like... ASCII numerals? I would use NSString to store that. If by 1s and 0s you mean a bunch of bits, then just divide the number of bits by 8 to get the number of bytes and make an NSData of the bytes.
(Editing to add untested code to convert a bitstream to a buffer)
//Assuming an array of 1s and 0s stored as some numeric type, called bits, and the number of bits in the array stored in a variable called bitsLength
NSMutableData *buffer = [NSMutableData data];
for (int i = 0; i < bitsLength; i += 8) {
    unsigned char byte = 0;   // pack up to 8 bits, least significant bit first
    for (int bit = 0; bit < 8 && i + bit < bitsLength; bit++) {
        if (bits[i + bit] > 0) {
            byte |= (1 << bit);
        }
    }
    [buffer appendBytes:&byte length:1];
}
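Going the other way (getting the 1s and 0s back out of the NSData) is the mirror image. A sketch, assuming the same least-significant-bit-first packing as above and reusing the buffer and bitsLength names:
const unsigned char *packed = [buffer bytes];
NSMutableString *reconstructed = [NSMutableString string];
for (int i = 0; i < bitsLength; i++) {
    int bit = (packed[i / 8] >> (i % 8)) & 1;   // extract bit i
    [reconstructed appendFormat:@"%d", bit];
}
NSLog(@"%@", reconstructed);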
I got this answer from: Convert Binary to Decimal in Objective C
Basically, I think the question could be phrased, "how do I parse a string representation of a binary number into a primitive number type". The magic is all in strtol.
NSString* b = @"01001010000010";
long v = strtol([b UTF8String], NULL, 2);
long data[1];
data[0] = v;
NSData* d = [NSData dataWithBytes:data length:sizeof(data)];
[d writeToFile:@"test.txt" atomically:YES];
Using this idea, you could split your string into chunks and convert them to longs; just keep each chunk shorter than the bit width of long (strtol returns a signed value and clamps to LONG_MAX on overflow).
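A rough sketch of that chunking idea, here with 32-digit chunks; how you pack the resulting values back together (the final partial chunk, byte order) is left up to you:
NSString *bits = @"110010101011110110";
NSUInteger chunkSize = 32;
NSMutableArray *chunks = [NSMutableArray array];
for (NSUInteger start = 0; start < bits.length; start += chunkSize) {
    NSUInteger len = MIN(chunkSize, bits.length - start);
    NSString *chunk = [bits substringWithRange:NSMakeRange(start, len)];
    [chunks addObject:@(strtol([chunk UTF8String], NULL, 2))];   // one long per chunk
}
NSLog(@"%@", chunks);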
I've got a really large decimal number in an NSString, too large to fit into any numeric type, including NSDecimal. I was doing the math manually, but if I can't fit the number into a variable then I can't divide it. So what would be a good way to convert the string?
Example Input: 423723487924398723478243789243879243978234
Output: 4DD361F5A772159224CE9EB0C215D2915FA
I was looking at the first answer here, but it's in C# and I don't know its Objective-C equivalent.
Does anyone have any ideas that don't involve using an external library?
If this is all you need, it's not too hard to implement, especially if you're willing to use Objective-C++. By using Objective-C++, you can use a vector to manage memory, which simplifies the code.
Here's the interface we'll implement:
// NSString+BigDecimalToHex.h
@interface NSString (BigDecimalToHex)
- (NSString *)hexStringFromDecimalString;
@end
To implement it, we'll represent an arbitrary-precision non-negative integer as a vector of base-65536 digits:
// NSString+BigDecimalToHex.mm
#import "NSString+BigDecimalToHex.h"
#import <vector>
// index 0 is the least significant digit
typedef std::vector<uint16_t> BigInt;
The "hard" part is to multiply a BigInt by 10 and add a single decimal digit to it. We can very easily implement this as long multiplication with a preloaded carry:
static void insertDecimalDigit(BigInt &b, uint16_t decimalDigit) {
    uint32_t carry = decimalDigit;
    for (size_t i = 0; i < b.size(); ++i) {
        uint32_t product = b[i] * (uint32_t)10 + carry;
        b[i] = (uint16_t)product;
        carry = product >> 16;
    }
    if (carry > 0) {
        b.push_back(carry);
    }
}
With that helper method, we're ready to implement the interface. First, we need to convert the decimal digit string to a BigInt by calling the helper method once for each decimal digit:
- (NSString *)hexStringFromDecimalString {
    NSUInteger length = self.length;
    unichar decimalCharacters[length];
    [self getCharacters:decimalCharacters range:NSMakeRange(0, length)];
    BigInt b;
    for (NSUInteger i = 0; i < length; ++i) {
        insertDecimalDigit(b, decimalCharacters[i] - '0');
    }
If the input string is empty, or all zeros, then b is empty. We need to check for that:
    if (b.size() == 0) {
        return @"0";
    }
Now we need to convert b to a hex digit string. The most significant digit of b is at the highest index. To avoid leading zeros, we'll handle that digit specially:
    NSMutableString *hexString = [NSMutableString stringWithFormat:@"%X", b.back()];
Then we convert each remaining base-65536 digit to four hex digits, in order from most significant to least significant:
    for (ssize_t i = (ssize_t)b.size() - 2; i >= 0; --i) {
        [hexString appendFormat:@"%04X", b[i]];
    }
And then we're done:
    return hexString;
}
You can find my full test program (to run as a Mac command-line program) in this gist.
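A hypothetical use of the category, with the input and expected output taken from the question:
NSString *decimal = @"423723487924398723478243789243879243978234";
NSLog(@"%@", [decimal hexStringFromDecimalString]);
// expected: 4DD361F5A772159224CE9EB0C215D2915FA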
I have a function to convert an integer into a byte array (for iPhone). To keep it dynamic I allocate the array using malloc, but I think this will leak memory. What's the best way to manage this memory?
+ (unsigned char *) intToByteArray:(int)num {
    unsigned char *arr = (unsigned char *)malloc(sizeof(num) * sizeof(unsigned char));
    for (int i = sizeof(num) - 1; i >= 0; i--) {
        arr[i] = num & 0xFF;
        num = num >> 8;
    }
    return arr;
}
When calling,
int x = 500;
unsigned char * bytes = [Util intToByteArray:x];
I want to avoid calling free(bytes), since the calling function doesn't know (and shouldn't have to know) that the memory was allocated and needs to be freed.
A few things:
The char, signed char, and unsigned char types all have a size of 1 by definition, so sizeof(unsigned char) is unnecessary.
It looks like you just want to get the byte representation of an int object. If this is the case, it is not necessary to allocate more space for it; simply take the address of the int and cast it to unsigned char *. If the byte order is wrong, you can use the NSSwapInt function to swap the order of the bytes in the int, then take the address of the result and cast it to unsigned char *. For example:
int someInt = 0x12345678;
unsigned char *bytes = (unsigned char *) &someInt;
This cast is legal and reading from bytes is legal up until sizeof(int) bytes are read. This is accessing the “object representation”.
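A small sketch of the byte-swap note above (NSSwapInt reverses the byte order, so on a little-endian machine this yields the big-endian representation):
int someInt = 0x12345678;
int swapped = NSSwapInt(someInt);
unsigned char *bigEndianBytes = (unsigned char *)&swapped;
// on a little-endian machine, bigEndianBytes[0] is now 0x12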
If you insist on using malloc, then you simply need to pass the buffer to free when you are done, as in:
free(bytes);
The name of your method does not imply the correct ownership of the returned buffer. If your method returns something that the caller is responsible for freeing, it is conventional to name the method using new, copy, or sometimes create. A more suitable name would be copyBytesFromInt: or something similar. Otherwise you could have the method accept a pre-allocated buffer and call the method getBytes:fromInt:, for example:
+ (void) getBytes:(unsigned char *)bytes fromInt:(int)num
{
    for (int i = sizeof(num) - 1; i >= 0; i--) {
        bytes[i] = num & 0xFF;
        num = num >> 8;
    }
}
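A hypothetical call site, where the caller owns (and here simply stack-allocates) the buffer, so there is nothing to free:
int x = 500;
unsigned char bytes[sizeof(int)];
[Util getBytes:bytes fromInt:x];
// bytes now holds x most-significant byte first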
You could wrap your bytes into a NSData instance:
NSData *data = [NSData dataWithBytesNoCopy:bytes length:sizeof(num) freeWhenDone:YES];
Make sure your method follows the usual object ownership rules.
Just call free(bytes); when you are done with the bytes (either at the end of the method or in the dealloc of the class).
Since you want to avoid the free call, you could wrap your byte array in an NSData object:
NSData *d = [NSData dataWithBytesNoCopy:bytes length:sizeof(int) freeWhenDone:YES];
The conventional way of handling this is for the caller to pass in an allocated byte buffer. That way the caller is responsible for freeing it. Something like:
int x = 500;
unsigned char *buffer = malloc(sizeof(int));
[Util int:x toByteArray:buffer];
…
free(buffer);
I would also consider creating an NSData to hold the bytes, this would take care of memory management for you, while still allowing you to alter the byte buffer:
+ (NSData *) intToByteArray:(int)num {
    unsigned char *arr = (unsigned char *)malloc(sizeof(num) * sizeof(unsigned char));
    for (int i = sizeof(num) - 1; i >= 0; i--) {
        arr[i] = num & 0xFF;
        num = num >> 8;
    }
    return [NSData dataWithBytesNoCopy:arr length:sizeof(num) freeWhenDone:YES];
}
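With this NSData-returning version, a hypothetical caller never has to think about free at all; the NSData owns the malloc'd buffer:
int x = 500;
NSData *bytes = [Util intToByteArray:x];
NSLog(@"%@", bytes);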
Using the Foundation and Cocoa frameworks on the Mac, I am trying to convert an NSData object into human-readable numbers.
Let's say the NSData object is an image of NPIXEL pixels. I know the binary data is big-endian and represents 32-bit integers (more precisely, 32-bit two's complement integers). I wrote the piece of code below to convert the NSData into an int array, but the values I get are completely wrong (this does not mean the measurements are bad; I used a special program to read the data, and the values it gives are different from the ones I get with my code).
-(int *) GetArrayOfLongInt
{
    //Get the total number of element into the Array
    int Nelements = [self NPIXEL];
    //CREATE THE ARRAY
    int array[Nelements];
    //FILL THE ARRAY
    int32_t intValue;
    int32_t swappedValue;
    double Value;
    int Nbit = abs(BITPIX)*GCOUNT*(PCOUNT + Nelements);
    Nbit /= sizeof(int32_t);
    int i = 0;
    int step = sizeof(int32_t);
    for(int bit = 0; bit < Nbit; bit += step)
    {
        [Img getBytes:&swappedValue range:NSMakeRange(bit, step)];
        intValue = NSSwapBigIntToHost(swappedValue);
        array[i] = intValue;
        i++;
    }
    return array;
}
This piece of code (with minor changes) works perfectly when the binary data represents floats or doubles, but it doesn't when the data is 16-, 32- or 64-bit integers. I also tried changing NSSwapBigIntToHost into NSSwapLittleIntToHost. I even tried with long, but the result is still the same: I get bad values. What am I doing wrong?
PS: Some of the variables in my code are already set elsewhere in my program. BITPIX is the bit size of each pixel, in this case 32. GCOUNT is equal to 1, PCOUNT to 0, and Nelements is the total number of pixels I should have in my image.
Returning a pointer to a local variable is a very bad idea. array could get overwritten at any time (or if you were to write through the pointer, you could corrupt the stack). You probably want something like:
// CREATE THE ARRAY
int *array = malloc(Nelements * sizeof(int));
Your algorithm seems a bit overkill, too. Why not just copy out the whole array from the NSData object, and then byteswap the entries in place? Something like:
int32_t length = [Img length];
int32_t *array = malloc(length);
[Img getBytes:array length:length];
for (int i = 0; i < length / sizeof(int32_t); i++)
{
    array[i] = NSSwapBigIntToHost(array[i]);
}
I'm attempting conversion of a legacy C++ program to Objective-C. The program needs an array of the 256 possible ASCII characters (8 bits per character). I'm attempting to use the NSString method initWithBytes:length:encoding: to do so. Unfortunately, when coded as shown below, it crashes (although it compiles).
NSString* charasstring[256];
unsigned char char00;
int temp00;
for (temp00 = 0; temp00 <= 255; ++temp00)
{
    char00 = (unsigned char)temp00;
    [charasstring[temp00] initWithBytes:&char00 length:1 encoding:NSASCIIStringEncoding];
}
What am I missing?
First, the method is simply initWithBytes:length:encoding: and not the NSString::initWithBytes you used in the title. I point this out only because forgetting everything you know from C++ is your first step towards success with Objective-C. ;)
Secondly, your code demonstrates that you don't understand Objective-C or use of the Foundation APIs.
you aren't allocating instances of NSString anywhere
you declared an array of 256 NSString instance pointers, probably not what you want
a properly encoded ASCII string cannot contain all 256 byte values (ASCII only covers 0-127)
I would suggest you start here.
To solve that specific problem, the following code should do the trick:
NSMutableArray* ASCIIChars = [NSMutableArray arrayWithCapacity:256];
int i;
for (i = 0; i <= 255; ++i)
{
    [ASCIIChars addObject:[NSString stringWithFormat:@"%c", (unsigned char)i]];
}
To be used, later on, as follows:
NSString* oneChar = [ASCIIChars objectAtIndex:32]; // for example
However, if all you need is an array of characters, you can just use a simple C array of characters:
unsigned char ASCIIChars[256];
int i;
for (i = 0; i <= 255; ++i)
{
    ASCIIChars[i] = (unsigned char)i;
}
To be used, later on, as follows:
unsigned char c = ASCIIChars[32];
The choice will depend on how you want to use that array of characters.