Initially I thought the code presented below was working: "inBuffer" seems to be correctly getting 4 bytes of data, and the variable MDD_times appeared to be correct.
NSData *inBuffer;
float MDD_times;
// FLOAT_002
inBuffer = [inFile readDataOfLength:sizeof(float)];
[inBuffer getBytes: &MDD_times length:sizeof(float)];
NSLog(#"Time: %f", MDD_times);
OK, let me expand on this a little (code above updated); this is what I am getting:
inBuffer = <3d2aaaab>
MDD_times = -1.209095e-12 (this would be 0.0416667 if interpreted as big-endian)
NSLog(@"Time: %f", MDD_times) = Time: -0.000000
It's probably NSLog that can't accommodate the float value: I flipped the bytes in the float (big-endian to host order) and the expected value "0.0416667" displays just fine. At least I know the NSData > float bit is working as intended.
gary
Here's some code I have to do this at a given offset in a buffer. This should work regardless of host endianness when the file is in big endian format.
union intToFloat
{
    uint32_t i;
    float fp;
};

+ (float)floatAtOffset:(NSUInteger)offset
                inData:(NSData *)data
{
    assert([data length] >= offset + sizeof(float));
    union intToFloat convert;
    const uint32_t *bytes = [data bytes] + offset;
    convert.i = CFSwapInt32BigToHost(*bytes);
    const float value = convert.fp;
    return value;
}
If you’re sure that the inFile returns data that was encoded with the same type of float and the same endianness, your code should work as expected.
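As a quick illustration (not part of the original answer; the class name ByteUtils is just a placeholder for whatever class declares the method), the floatAtOffset:inData: helper above recovers the value expected in the question from the bytes <3d2aaaab>:

// Hypothetical usage of the helper above.
uint8_t raw[] = { 0x3d, 0x2a, 0xaa, 0xab };                // big-endian bytes from the question
NSData *inBuffer = [NSData dataWithBytes:raw length:sizeof(raw)];
float MDD_times = [ByteUtils floatAtOffset:0 inData:inBuffer];
NSLog(@"Time: %f", MDD_times);                             // prints Time: 0.041667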
I have this method which extracts data from NSData at a specific pointer. The method only extracts a certain amount of bytes, in this case it is 4 bytes as I return a uint32.
I pass in a pointer (int start) which is used to create the location for an NSRange, the length of the range is the size of a uint32, which creates the range as 4 bytes long.
This works perfectly fine until the pointer gets to 2147483648. At that value, the range is not created with 2147483648 as the location; instead it is created with 18446744071562067968, which is out of bounds for the data and causes an exception that halts my program and stops it from reading the rest of the data.
I have no idea what is causing it to do this; the start value is correct when it is passed into the method, but it changes when the range is created. It does not happen for any of the previous pointer values.
Have I done something silly in my code? Or is it a different problem? Help will be appreciated.
Thank you.
- (uint32)getUINT32ValueFromData:(NSData *)rawData pointer:(int)start {
    uint32 value;
    NSRange range;
    int length = sizeof(uint32);
    NSUInteger dataLength = rawData.length;
    NSData *currentData;
    NSUInteger remainingBytes = dataLength - start;

    if (remainingBytes > length) {
        range.location = start;
        range.length = length;
        //should be 2147483648, location in range is showing 18446744071562067968 which is out of bounds...
        currentData = [rawData subdataWithRange:range];
        uint32 hostData = CFSwapInt32BigToHost(*(const uint32 *)[currentData bytes]);
        value = hostData;
        pointer = start + length;
    }
    else
    {
        NSLog(@"Data Length Exceeded!");
    }
    return value;
}
It seems to be a 32/64-bit and signed/unsigned mismatch issue.
You're using three different types:
int is a 32-bit signed type.
uint32 is a 32-bit unsigned type.
NSUInteger is a 32- or 64-bit unsigned type, depending on the processor architecture.
2147483648 does not fit in a signed 32-bit int, so start wraps to -2147483648; when that negative value is assigned to the 64-bit unsigned range.location it is sign-extended, which gives exactly 18446744071562067968 (that is, 2^64 - 2^31).
uint32 for the value is fine, but you should use NSUInteger for the offset into the NSData object.
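A possible sketch of the method with the offset handled as NSUInteger throughout (names kept from the question; the line that assigned to the undeclared pointer variable is left out, and the bounds check uses >= so the final four bytes can still be read):

- (uint32_t)getUINT32ValueFromData:(NSData *)rawData pointer:(NSUInteger)start {
    uint32_t value = 0;
    NSUInteger length = sizeof(uint32_t);

    if (rawData.length >= start + length) {
        // start is unsigned (and 64-bit on 64-bit targets), so no sign extension can occur
        NSRange range = NSMakeRange(start, length);
        NSData *currentData = [rawData subdataWithRange:range];
        value = CFSwapInt32BigToHost(*(const uint32_t *)[currentData bytes]);
    } else {
        NSLog(@"Data Length Exceeded!");
    }
    return value;
}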
So I'm working on processing audio with Objective-C, and am attempting to write a gain change function. I have limited the accepted audio formats to 16-bit AIFF files only for now. The process I am using is straightforward: I grab the audio data from my AIFF object, skip to the point in the audio where I want to process (if x1 is 10 and x2 is 20, the goal is to change the amplitude of the samples from 10 seconds into the audio to 20 seconds in), and then step through the samples, applying the gain change through multiplication. The problem is that after I write the processed samples to a new NSMutableData and then write a new AIFF file using that sound data, the processed samples are completely messed up and the audio is basically just noise.
- (NSMutableData *)normalizeAIFF:(AIFFAudio *)audio x1:(int)x1 x2:(int)x2 {
    // obtain audio data bytes from AIFF object
    SInt16 *bytes = (SInt16 *)[audio.ssndData bytes];
    NSUInteger length = [audio.ssndData length] / sizeof(SInt16);
    NSMutableData *newAudio = [[NSMutableData alloc] init];
    int loudestSample = [self findLoudestSample:audio.ssndData];

    // skip offset and blocksize in SSND data and proceed to user selected point
    // For 16 bit, 44.1 audio, each second of sound data holds 88.2 thousand samples
    int skipTo = 4 + (x1 * 88200);
    int processChunk = ((x2 - x1) * 88200) + skipTo;

    for (int i = skipTo; i < processChunk; i++) {
        // convert to float format for processing
        Float32 sampleFloat = (Float32)bytes[i];
        sampleFloat = sampleFloat / 32768.0;

        // This is where I would change the amplitude of the sample
        // sampleFloat = sampleFloat + (sampleFloat * 0.5);

        // make sure not clipping
        if (sampleFloat > 1.0) {
            sampleFloat = 1.0;
        } else if (sampleFloat < -1.0) {
            sampleFloat = -1.0;
        }

        // convert back to SInt16
        sampleFloat = sampleFloat * 32768.0;
        if (sampleFloat > 32767.0) {
            sampleFloat = 32767.0;
        } else if (sampleFloat < -32768.0) {
            sampleFloat = -32768.0;
        }
        bytes[i] = (SInt16)sampleFloat;
    }
    [newAudio appendBytes:bytes length:length];
    return newAudio;
}
Where in this process could I be going wrong? Is it converting the sample from SInt16 -> float -> SInt16? Printing the data before, during, and after this conversion seems to show that nothing is going wrong there. It seems to be after I pack it back into an NSMutableData object, but I'm not too sure.
Any help is appreciated.
EDIT: I also want to mention that when I send audio through this function with the gain change factor set to 0, so that the resulting waveform should be identical to the input, there are no issues; the waveform comes out looking and sounding exactly the same. The problem only appears when the gain change factor is set to a value that actually changes the samples.
EDIT2: I changed the code to use a pointer and a type cast rather than memcpy(). I still am getting weird results when multiplying the floating point representation of the sample by any number. When I multiply the sample as an SInt16 by an integer I get the proper result, though. This leads me to believe my problem lies in the way I am going about floating point arithmetic. Is there anything anyone sees with the floating point equation I commented out that could be leading to errors?
The problem turned out to be an endianness issue as Zaph alluded to. I thought I was handling the conversion of big-endian to little-endian correctly when I was not. Now the code looks like:
- (NSMutableData *)normalizeAIFF:(AIFFAudio *)audio x1:(int)x1 x2:(int)x2 {
    // obtain audio data bytes from AIFF object
    SInt16 *bytes = (SInt16 *)[audio.ssndData bytes];
    NSUInteger length = [audio.ssndData length];
    NSMutableData *newAudio = [[NSMutableData alloc] init];

    // skip offset and blocksize in SSND data and proceed to user selected point
    // For 16 bit, 44.1 audio, each second of sound data holds 88.2 thousand samples
    int skipTo = 4 + (x1 * 88200);
    int processChunk = ((x2 - x1) * 88200) + skipTo;

    for (int i = skipTo; i < processChunk; i++) {
        SInt16 sample = CFSwapInt16BigToHost(bytes[i]);
        bytes[i] = CFSwapInt16HostToBig(sample * 0.5);
    }
    [newAudio appendBytes:bytes length:length];
    return newAudio;
}
The gain change factor of 0.5 will change, and I still have to actually normalize the data in relation to the sample with the greatest amplitude in the selection, but the issue I had is solved. When writing the new waveform out to a file it sounds and looks as expected.
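This also explains the observation in the first EDIT: with the swap missing, a gain factor of 0 writes the identical (still big-endian) bytes back, so the file survives, while any real gain change scales a misread value and produces noise. If the gain still needs to be applied in floating point for the normalization step, one possible sketch (the 0.5 factor is again just a placeholder) keeps the original float scaling and clipping inside the byte swaps:

for (int i = skipTo; i < processChunk; i++) {
    // big-endian AIFF sample -> host order
    SInt16 sample = (SInt16)CFSwapInt16BigToHost(bytes[i]);

    // apply the gain in float, then clip to the 16-bit range
    Float32 scaled = (Float32)sample * 0.5f;
    if (scaled > 32767.0f)  scaled = 32767.0f;
    if (scaled < -32768.0f) scaled = -32768.0f;

    // host order -> big-endian before the samples are written back out
    bytes[i] = (SInt16)CFSwapInt16HostToBig((SInt16)scaled);
}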
I am trying to convert an NSString with hex values into a float value.
NSString *hexString = @"3f9d70a4";
The float value should be 1.230.
Some ways I have tried to solve this are:
1. NSScanner
- (unsigned int)strfloatvalue:(NSString *)str
{
    float outVal;
    NSString *newStr = [NSString stringWithFormat:@"0x%@", str];
    NSScanner *scanner = [NSScanner scannerWithString:newStr];
    NSLog(@"string %@", newStr);
    bool test = [scanner scanHexFloat:&outVal];
    NSLog(@"scanner result %d = %a (or %f)", test, outVal, outVal);
    return outVal;
}
results:
string 0x3f9d70a4
scanner result 1 = 0x1.fceb86p+29 (or 1067282624.000000)
2. Casting pointers
NSNumber *xPtr = [NSNumber numberWithFloat:[(NSNumber *)@"3f9d70a4" floatValue]];
result: 3.000000
What you have is not a "hexadecimal float", as is produced by the %a string format and scanned by scanHexFloat:, but the hexadecimal representation of a 32-bit floating-point value - i.e. the actual bits.
To convert this back to a float in C requires messing with the type system - to give you access to the bytes that make up a floating-point value. You can do this with a union:
typedef union { float f; uint32_t i; } FloatInt;
This type is similar to a struct, but the fields are overlaid on top of each other. You should understand that doing this kind of manipulation requires that you understand the storage formats and are aware of endian order, etc. Do not do this lightly.
Now you have the above type you can scan a hexadecimal integer and interpret the resultant bytes as a floating-point number:
FloatInt fl;
NSScanner *scanner = [NSScanner scannerWithString:@"3f9d70a4"];

if ([scanner scanHexInt:&fl.i]) // scan into the i field
{
    NSLog(@"%x -> %f", fl.i, fl.f); // display the f field, interpreting the bytes of i as a float
}
else
{
    // parse error
}
This works, but again consider carefully what you are doing.
HTH
I think a better solution is a workaround like this:
- (float)getFloat:(NSInteger *)pIndex
{
    NSInteger index = *pIndex;
    NSData *data = [self subDataFromIndex:&index withLength:4];
    *pIndex = index;

    uint32_t hostData = CFSwapInt32BigToHost(*(const uint32_t *)[data bytes]);
    return *(float *)(&hostData);
}
Here the data (the receiver) is an NSData which represents the number in hex format, and the input parameter is a pointer to the index of the element within the NSData.
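Note that subDataFromIndex:withLength: is not a Foundation method; presumably it is a small category helper that returns the next few bytes and advances the index, along these lines (a sketch, with a made-up category name):

@implementation NSData (HexFloatSketch) // hypothetical category name

- (NSData *)subDataFromIndex:(NSInteger *)pIndex withLength:(NSUInteger)length
{
    NSData *sub = [self subdataWithRange:NSMakeRange((NSUInteger)*pIndex, length)];
    *pIndex += length; // advance the caller's read position
    return sub;
}

@end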
So basically you are trying to convert an NSString to a C float; there's an old-fashioned way to do that!
NSString *hexString = @"3f9d70a4";
const char *cHexString = [hexString UTF8String];
long l = strtol(cHexString, NULL, 16);
float f = *((float *)&l);
// f = 1.23
For more detail, please see this answer.
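A variant of the same idea (my sketch, not from the linked answer) that sidesteps the pointer cast - which technically breaks strict aliasing and only reads the right bytes on a little-endian host, since long is 8 bytes on 64-bit platforms - is to parse into a 32-bit integer and memcpy it into the float:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

NSString *hexString = @"3f9d70a4";
uint32_t bits = (uint32_t)strtoul([hexString UTF8String], NULL, 16);
float f;
memcpy(&f, &bits, sizeof f); // reinterpret the 32 bits as an IEEE 754 float
// f == 1.23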
I need to store a series of 1s and 0s of arbitrary length.
I had planned to use ints, but then it occurred to me that really all I need is a bitstream.
NSMutableData seems like just the thing. Except all I see anyone talking about is how to set bytes on it, or store jpegs or strings in it. I need to get way more granular than that.
Given a series of 1s and 0s such as: 110010101011110110, how do I make it into an NSData object--and how do I get it out?
NSData's appendBytes:length: and mutableBytes are all at the byte level, and I need to start lower. Storing those 1s and 0s as bytes doesn't make sense, when the bytes themselves are made of sets of 1s and 0s. I'm having trouble finding anything telling me how to set bits.
Here's some faux code:
NSString *sequence = @"01001010000010"; //(or int sequence, or whatever)
for (...) { //iterate through whatever it is--this isn't what I need help with
    if ([sequence intOrCharOrWhateverAtIndex:index] == 0) {
        //do something to set a bit -- this is what I need help with
    } else {
        //set the bit the other way -- again, this is what I need help with
    }
}
NSData *data = [NSData something]; //wrap it up and save it -- help here too
Do you literally have 1s and 0s? Like... ASCII numerals? I would use NSString to store that. If by 1s and 0s you mean a bunch of bits, then just divide the number of bits by 8 to get the number of bytes and make an NSData of the bytes.
(Editing to add untested code to convert a bitstream to a buffer)
//Assuming the presence of an array of 1s and 0s stored as some numeric type, called bits, and the number of bits in the array stored in a variable called bitsLength
NSMutableData *buffer = [NSMutableData data];
for (int i = 0; i < bitsLength; i += 8) {
    char byte = 0;
    for (int bit = 0; bit < 8 && i + bit < bitsLength; bit++) {
        if (bits[i + bit] > 0) {
            byte += (1 << bit);
        }
    }
    [buffer appendBytes:&byte length:1];
}
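Getting the bits back out (the other half of the question) is the mirror image; a sketch, assuming the same least-significant-bit-first packing used above:

// Recover bit n from the buffer built above (LSB-first within each byte).
const unsigned char *packedBytes = [buffer bytes];
for (int n = 0; n < bitsLength; n++) {
    int bit = (packedBytes[n / 8] >> (n % 8)) & 1;
    // bit is now 0 or 1, in the original order
}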
I got this answer from: Convert Binary to Decimal in Objective C
Basically, I think the question could be phrased, "how do I parse a string representation of a binary number into a primitive number type". The magic is all in strtol.
NSString *b = @"01001010000010";
long v = strtol([b UTF8String], NULL, 2);

long data[1];
data[0] = v;
NSData *d = [NSData dataWithBytes:data length:sizeof(data)];
[d writeToFile:@"test.txt" atomically:YES];
Using this idea, you could split your string into 64-character chunks and convert them to longs.
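A rough sketch of that chunking (using strtoull and unsigned 64-bit storage so a chunk starting with a 1 bit does not overflow; note that the total bit count has to be remembered separately if the last chunk is shorter than 64 bits):

#include <stdlib.h>

NSString *sequence = @"01001010000010"; // can be arbitrarily long
NSMutableData *packed = [NSMutableData data];

for (NSUInteger pos = 0; pos < sequence.length; pos += 64) {
    NSUInteger len = MIN((NSUInteger)64, sequence.length - pos);
    NSString *chunk = [sequence substringWithRange:NSMakeRange(pos, len)];
    uint64_t bits = strtoull([chunk UTF8String], NULL, 2);
    [packed appendBytes:&bits length:sizeof(bits)];
}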
Using the Foundation and Cocoa frameworks on the Mac, I am trying to convert an NSData object into humanly understandable numbers.
Let's say the NSData object is an image of NPIXEL pixels. I know the binary data are coded in big-endian and represent 32-bit integers (to be more precise, 32-bit two's-complement integers). I wrote the piece of code below to convert the NSData into an int array, but the values I get are completely wrong (this does not mean the measurements are bad; I used a special piece of software to read the data, and the values given by that software are different from the ones I got with my code).
- (int *)GetArrayOfLongInt
{
    //Get the total number of element into the Array
    int Nelements = [self NPIXEL];

    //CREATE THE ARRAY
    int array[Nelements];

    //FILL THE ARRAY
    int32_t intValue;
    int32_t swappedValue;
    double Value;
    int Nbit = abs(BITPIX) * GCOUNT * (PCOUNT + Nelements);
    Nbit /= sizeof(int32_t);

    int i = 0;
    int step = sizeof(int32_t);
    for (int bit = 0; bit < Nbit; bit += step)
    {
        [Img getBytes:&swappedValue range:NSMakeRange(bit, step)];
        intValue = NSSwapBigIntToHost(swappedValue);
        array[i] = intValue;
        i++;
    }
    return array;
}
This piece of code (with minor changes) works perfectly when the binary data represent floats or doubles, but it does not when they are 16-, 32- or 64-bit integers. I also tried changing NSSwapBigIntToHost into NSSwapLittleIntToHost. I even tried with long, but the result is still the same: I get bad values. What am I doing wrong?
PS: Some of the variables in my code are already set elsewhere in my program. BITPIX is the bit size of each pixel, in this case 32. GCOUNT is equal to 1, PCOUNT is 0, and Nelements is the total number of pixels I should have in my image.
Returning a pointer to a local variable is a very bad idea. array could get overwritten at any time (or if you were to write through the pointer, you could corrupt the stack). You probably want something like:
// CREATE THE ARRAY
int *array = malloc(Nelements * sizeof(int));
Your algorithm seems a bit overkill, too. Why not just copy out the whole array from the NSData object, and then byteswap the entries in place? Something like:
int32_t length = [Img length];
int32_t *array = malloc(length);
[Img getBytes:array length:length];

for (NSUInteger i = 0; i < length / sizeof(int32_t); i++)
{
    array[i] = NSSwapBigIntToHost(array[i]);
}