I'm currently trying to create a 16 bit/channel (48bpp) image as an array in Objective-C and then put it in an NSBitmapImageRep. For a start, I just want to fill it with noise to see if it works, so I'm using a C for loop, and for some reason it produces a segmentation fault (signal 11); I can't see how that's possible. I've tried this exact code snippet in a simple .c file and it works absolutely fine. For now this is happening in the app's main function; eventually I want it done from a method on the click of a button, but I don't know how I'd access the array from there since it's not global (I could do with a couple of tips about that too).
Also, I've tried using uint8_t instead of 16-bit pixels; it doesn't make a difference.
int sizeElements = 1880 * 1056 * 3;
int sizeBytes = sizeElements * sizeof(uint16_t);
uint16_t * imagearray = (uint16_t *)malloc(sizeBytes);
for (int i = 0; i < sizeElements; i++)
{
imagearray[i] = rand() % 65536;
}
NSBitmapImageRep * theBitmap = [
[NSBitmapImageRep alloc]
initWithBitmapDataPlanes: (unsigned char * _Nullable * _Nullable)imagearray
pixelsWide: 1880
pixelsHigh: 1056
bitsPerSample: 16
samplesPerPixel: 3
hasAlpha: NO
isPlanar: NO
colorSpaceName: NSDeviceRGBColorSpace
bitmapFormat: 0
bytesPerRow: 1880 * 3 * 2
bitsPerPixel: 48
];
Thanks to anyone who knows, this has been hurting me for hours :(
edit: initialising with NULL as the bitmap data planes gets rid of the fault, but so does removing the for loop. Why is this, and how can I get it working?
EDIT: adding & like this: &imagearray helped, thanks @everyoneWhoAnswered
The array of planes is not the pixel data, it's an array of pointers to pixel data. You need to put another array in between.
In your case, since your image is not planar, that's an array with the pixel data pointer in its first element, and the others set to NULL.
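For the non-planar case, a minimal sketch of what that looks like with the buffer from the question (the plane array has the pixel pointer in its first slot and NULL in the rest):

unsigned char *planes[5] = { (unsigned char *)imagearray, NULL, NULL, NULL, NULL };
NSBitmapImageRep *theBitmap = [
    [NSBitmapImageRep alloc]
    initWithBitmapDataPlanes: planes
    pixelsWide: 1880
    pixelsHigh: 1056
    bitsPerSample: 16
    samplesPerPixel: 3
    hasAlpha: NO
    isPlanar: NO
    colorSpaceName: NSDeviceRGBColorSpace
    bitmapFormat: 0
    bytesPerRow: 1880 * 3 * 2
    bitsPerPixel: 48
];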
I have a question about using standard deviation, and whether I'm using it properly for my case as laid out below. (The indexes involved are all unique.)
Here are a few questions I have about standard deviation:
1) Since I'm using all of the data, should I be using a population standard deviation or a sample standard deviation?
2) Does it matter what the length (range) of the full playlist is (1...15)?
I have a program that takes a playlist of songs and gets recommendations for each song from Spotify. Say the playlist has a length of 15. Each track gets an array of about 30 suggested tracks, and in the end my program filters all of the suggestions down to create a new playlist of only 15 tracks.
Often the same track gets recommended more than once. I have devised a method for finding these duplicates and putting their indexes into an NSIndexSet.
In my example, one duplicate track was suggested for the tracks at indexes 4, 6, 7 and 12 of the original playlist.
I'm trying to work out which of the duplicates is the best one to pick. The NSSet methods and the like were not going to help me; they take no account of where the duplicates were placed. To me it makes sense that the more often a track is suggested within a given "zone", the more sense it makes to use it at that location in the final suggested playlist.
Originally I was just selecting the index closest to the mean (7.25), but to me 6 would seem a better choice than 7; the 12 seems to throw it off. So I started investigating standard deviation and figured that could help me out.
What do you think of my approach here?
NSMutableIndexSet* dupeIndexsSet; // contains indexes 4, 6, 7, 12
// I have an extension on NSIndexSet to create an NSArray from it
NSArray* dupeIndexsArray = [dupeIndexsSet indexSetAsArray]; // @[@4, @6, @7, @12]
NSUInteger dupeIndexsCount = [dupeIndexsArray count]; // 4
NSUInteger dupeIndexSetFirst = [dupeIndexsSet firstIndex]; // 4
NSUInteger dupeIndexSetLast = [dupeIndexsSet lastIndex]; // 12
// I have an extension on NSArray to calculate the mean
NSNumber* dupeIndexsMean = [dupeIndexsArray meanOf]; // 7.25
the populationSD is 2.9475
the populationVariance is 8.6875
the sampleSD is 3.4034
the sampleVariance is 11.5833
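For reference, a minimal sketch of how those two values differ (a hypothetical helper, not the extension from the question); the only difference is the divisor, n versus n - 1:

#include <math.h>

// Population SD divides by n (you have every value there is);
// sample SD divides by n - 1 (you only have a sample and want an
// unbiased estimate of the underlying population).
static double standardDeviation(NSArray<NSNumber *> *values, BOOL population) {
    double n = (double)values.count;
    double sum = 0;
    for (NSNumber *v in values) sum += v.doubleValue;
    double mean = sum / n;
    double sumOfSquares = 0;
    for (NSNumber *v in values) {
        double d = v.doubleValue - mean;
        sumOfSquares += d * d;
    }
    return sqrt(sumOfSquares / (population ? n : n - 1));
}
// standardDeviation(@[@4, @6, @7, @12], YES) => 2.9475 (population)
// standardDeviation(@[@4, @6, @7, @12], NO)  => 3.4034 (sample)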
Which SD should I use? Or does it even matter?
I learned that the SD describes the spread of values around the mean, so I figured I would calculate what that range is.
double mean = [dupeIndexsMean doubleValue];
double dev = [populationSD doubleValue];
NSUInteger stdDevRangeStart = MAX(round(mean - dev), dupeIndexSetFirst);
// 7.25 - 2.9475 = 4.3025, rounds to 4; dupeIndexSetFirst = 4
// stdDevRangeStart = 4;
NSUInteger stdDevRangeEnd = MIN(round(mean + dev), dupeIndexSetLast);
// 7.25 + 2.9475 = 10.1975, rounds to 10; dupeIndexSetLast = 12
// stdDevRangeEnd = 10;
NSUInteger stdDevRangeLength1 = stdDevRangeEnd - stdDevRangeStart; // 6
NSUInteger stdDevRangeLength2 = MAX(round(dev * 2), stdDevRangeLength1);
// 2.9475 * 2 = 5.895, rounds to 6; stdDevRangeLength1 = 6
// stdDevRangeLength2 = 6;
NSRange dupeStdDevRange = NSMakeRange(stdDevRangeStart, stdDevRangeLength2);
// location = 4, length = 6
I figured this new range would give me a tighter set of indexes, one with a more representative SD that does not include the 12. So I create a new index set from the original one, containing only the indexes that fall within my new dupeStdDevRange:
NSMutableIndexSet* stdDevIndexSet = [NSMutableIndexSet new];
// Note: enumerate sequentially (options:0). NSMutableIndexSet is not
// thread-safe, so NSEnumerationConcurrent could corrupt it here.
[dupeIndexsSet enumerateIndexesInRange:dupeStdDevRange
                               options:0
                            usingBlock:^(NSUInteger idx, BOOL * _Nonnull stop)
{
    [stdDevIndexSet addIndex:idx];
}];
// stdDevIndexSet now has indexes 4, 6, 7
The new stdDevIndexSet includes indexes 4, 6 and 7. 12 was not included, which is great, because I thought it was throwing everything off.
Now I check this new stdDevIndexSet against the original index set. If the stdDevIndexSet count is smaller, I feed the new index set back into the whole process and calculate everything again:
if ([stdDevIndexSet count] < dupeIndexsCount)
{
[self loadDupesIndexSet:stdDevIndexSet];
}
else {
doneTrim = YES;
}
The count is smaller, so I start the whole process again with the index set that includes 4, 6, 7.
Updated calculations:
dupeIndexsMean = 5.6667;
populationSD = 1.2472;
populationVariance = 1.5556;
sampleSD = 1.5275;
sampleVariance = 2.3333;
stdDevRangeStart = 4;
stdDevRangeEnd = 7;
The new trimmed index set now "fits" the standard deviation range. So if I use the new mean rounded to 6, my best index to use is 6 from the original (4, 6, 7, 12), which now makes sense to me.
So, the big question: am I approaching this correctly?
And do things like the original size (length) of the "potential" range matter, i.e. if the original playlist length were 20 tracks as compared to 40? (I'm thinking not.)
I'm starting to play around with some C code within Objective-C programs. The function I'm trying to write sorts all of the lat/long coordinates from a KML file into clusters based on 2D arrays.
I'm using three 2D arrays to accomplish this:
NSUInteger **bucketCounts holds the number of CLLocationCoordinate2Ds in each cluster.
CLLocationCoordinate2D **coordBuckets is an array of arrays of coordinates.
NSUInteger **bucketPointers holds, for each cell, an index into coordBuckets.
Here's the code that is messing me up:
//Initialize C arrays and indexes
int n = 10;
bucketCounts = (NSUInteger**)malloc(sizeof(NSUInteger*)*n);
bucketPointers = (NSUInteger**)malloc(sizeof(NSUInteger*)*n);
coordBuckets = (CLLocationCoordinate2D **)malloc(sizeof(CLLocationCoordinate2D*)*n);
for (int i = 0; i < n; i++) {
bucketPointers[i] = malloc(sizeof(NSUInteger)*n);
bucketCounts[i] = malloc(sizeof(NSUInteger)*n);
}
NSUInteger nextEmptyBucketIndex = 0;
int bucketMax = 500;
Then for each CLLocationCoordinate2D that needs to be added:
//find location to enter point in matrix
int latIndex = (int)(n * (oneCoord.latitude - minLat)/(maxLat-minLat));
int lonIndex = (int)(n * (oneCoord.longitude - minLon)/(maxLon-minLon));
//see if necessary bucket exists yet. If not, create it.
NSUInteger positionInBucket = bucketCounts[latIndex][lonIndex];
if (positionInBucket == 0) {
coordBuckets[nextEmptyBucketIndex] = malloc(sizeof(CLLocationCoordinate2D) * bucketMax);
bucketPointers[latIndex][lonIndex] = nextEmptyBucketIndex;
nextEmptyBucketIndex++;
}
//Insert point in bucket.
NSUInteger bucketIndex = bucketPointers[latIndex][lonIndex];
CLLocationCoordinate2D *bucketForInsert = coordBuckets[bucketIndex];
bucketForInsert[positionInBucket] = oneCoord;
bucketCounts[latIndex][lonIndex]++;
positionInBucket++;
//If bucket is full, expand it.
if (positionInBucket % bucketMax == 0) {
coordBuckets[bucketIndex] = realloc(coordBuckets[bucketIndex], (sizeof(CLLocationCoordinate2D) * (positionInBucket + bucketMax)));
}
Things seem to go well for about 800 coordinates, but then a value in either bucketCounts or bucketPointers gets set to an impossibly high number, which leads to a reference to a bad value and crashes the program. I'm sure this is a memory-management issue, but I don't know C well enough to troubleshoot it myself. Any helpful pointers as to where I'm going wrong? Thanks!
It seems to me each entry in bucketPointers can potentially have its own "bucket", requiring a unique element of coordBuckets to hold the pointer to that bucket.
The entries in bucketPointers are indexed by bucketPointers[latIndex][lonIndex], so there can be n*n of them, but you allocated only n places in coordBuckets.
I think you should allocate n*n elements in coordBuckets.
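A one-line sketch of that fix:

// bucketPointers is indexed [latIndex][lonIndex], so up to n * n distinct
// buckets can exist; coordBuckets needs a slot for each of them.
coordBuckets = (CLLocationCoordinate2D **)malloc(sizeof(CLLocationCoordinate2D *) * n * n);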
Two problems I see:
You don't initialize bucketCounts[] in the given code. It may well happen to be all 0s, but you should still initialize it explicitly with calloc() or memset():
bucketCounts[i] = calloc(n, sizeof(NSUInteger));
If oneCoord.latitude == maxLat, then latIndex == n, which overflows your arrays; their valid indexes run from 0 to n-1. The same issue applies to lonIndex. Either allocate n+1 elements and/or make sure latIndex and lonIndex are clamped to the range 0 to n-1.
In code using raw arrays like this, you can avoid a lot of issues with two simple rules (a sketch applying both follows the list):
Initialize all arrays (even if you technically don't need to).
Check/verify all array indexes to prevent out-of-bounds accesses.
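Applied to the code above, the two rules might look like this (a sketch reusing the question's variable names):

// Rule 1: zero-initialize so empty cells reliably read as count 0.
for (int i = 0; i < n; i++) {
    bucketPointers[i] = calloc(n, sizeof(NSUInteger));
    bucketCounts[i] = calloc(n, sizeof(NSUInteger));
}

// Rule 2: clamp the computed indexes so latitude == maxLat
// cannot produce latIndex == n (one past the end).
int latIndex = (int)(n * (oneCoord.latitude - minLat) / (maxLat - minLat));
int lonIndex = (int)(n * (oneCoord.longitude - minLon) / (maxLon - minLon));
latIndex = MIN(MAX(latIndex, 0), n - 1);
lonIndex = MIN(MAX(lonIndex, 0), n - 1);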
I'm writing an Objective-C algorithm that compares two images and outputs the differences.
Occasionally two identical images will be passed in. Is there a way to tell immediately from the resulting CGImageRef that it contains no data (i.e. only transparent pixels)?
The algorithm is running at > 20 fps so performance is a top priority.
You should go with Core Image here.
Have a look at the "CIArea*" filters.
See the Core Image Filter Reference here: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Reference/CoreImageFilterReference/Reference/reference.html
This will be a lot faster than any of the previous approaches.
Let us know if this works for you.
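For the fully-transparent check, something along these lines might work (a sketch, untested: it assumes a CIContext named ciContext that you keep alive across frames; CIAreaMaximum reduces the image to a single pixel holding the per-channel maximum, so an alpha of 0 in that pixel means every source pixel was transparent):

CIImage *ciImage = [CIImage imageWithCGImage:imageRef]; // imageRef: your CGImageRef
CIFilter *filter = [CIFilter filterWithName:@"CIAreaMaximum"];
[filter setValue:ciImage forKey:kCIInputImageKey];
[filter setValue:[CIVector vectorWithCGRect:ciImage.extent] forKey:@"inputExtent"];

// Read the 1x1 result back into a 4-byte buffer.
uint8_t pixel[4] = {0, 0, 0, 0};
[ciContext render:filter.outputImage
         toBitmap:pixel
         rowBytes:4
           bounds:CGRectMake(0, 0, 1, 1)
           format:kCIFormatRGBA8
       colorSpace:nil];

BOOL fullyTransparent = (pixel[3] == 0);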
From a performance perspective, you should incorporate this check into your comparison algorithm. The most expensive operation when working on images is usually getting a small piece of the image into cache. Once it is there, there are plenty of ways of working on the data really fast (SIMD), but the problem is that you have to evict and refill the cache with new data all the time, and that is computationally expensive. Since your algorithm already visits every pixel of both images once, it makes sense to compute the sum of absolute differences (SAD) at the same time, while the data is still in cache. So, in pseudo-code:
int total_sad = 0
for y = 0; y < height; y++
for x = 0; x < width; x+=16
xmm0 = load_data (image0 + y * width + x)
xmm1 = load_data (image1 + y * width + x)
/* this stores the differences (your algorithm) */
store_data (result_image + y * width + x, diff (xmm0, xmm1))
/* this does the SAD at the same time */
total_sad += sad (xmm0, xmm1)
if (total_sad == 0)
print "the images are identical!"
Hope that helps.
I'm not sure about this, but if you already have a sample image that is completely blank, you can compare against it:
UIImage *image = [UIImage imageWithCGImage:imgRef]; // imgRef is your CGImageRef
if (blankImageData == nil)
{
    UIImage *blankImage = [UIImage imageNamed:@"BlankImage.png"];
    blankImageData = UIImagePNGRepresentation(blankImage); // blankImageData is a global used as a cache
}
// Now the comparison
NSData *imageData = UIImagePNGRepresentation(image); // image from the CGImageRef
if ([imageData isEqualToData:blankImageData])
{
    // Your image is blank
}
else
{
    // There are some colourful pixels :)
}
While doing image processing with UIImages in Objective-C, I am running into an unusual image-skewing problem with some images; they get badly distorted. The same problem occurs in OpenCV when I use the width instead of the width step.
I know this problem arises for images whose width is not a multiple of 4 (improper row alignment); my question is, how do I fix the issue in Objective-C?
My sample code:
unsigned char *output_image = (unsigned char *)malloc(height2 * width2 * 4);
int i, j;
for (i = 0; i < height2; i++) {
    for (j = 0; j < 4 * width2; j = j + 4) {
        output_image[i * 4 * width2 + j] = 46;
        output_image[i * 4 * width2 + j + 1] = 100;
        output_image[i * 4 * width2 + j + 2] = 150;
        output_image[i * 4 * width2 + j + 3] = 255;
    }
}
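For what it's worth, the usual fix is to step through rows by the image's actual bytes-per-row (the "width step") rather than by width2 * 4, since rows can be padded for alignment. A sketch, assuming the stride is queried from a source CGImage named cgImage:

size_t bytesPerRow = CGImageGetBytesPerRow(cgImage); // can be > width2 * 4
unsigned char *output_image = (unsigned char *)malloc(height2 * bytesPerRow);
for (int i = 0; i < height2; i++) {
    // Step by the real stride, not by width2 * 4.
    unsigned char *row = output_image + i * bytesPerRow;
    for (int j = 0; j < width2; j++) {
        row[4 * j] = 46;
        row[4 * j + 1] = 100;
        row[4 * j + 2] = 150;
        row[4 * j + 3] = 255;
    }
}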
int first[] = {1, 4};
int second[] = {2, 3, 7};
arrayOfCPointers[0] = first;
arrayOfCPointers[1] = second;
NSLog(@"size of %lu", sizeof(arrayOfCPointers[0]) / sizeof(int));
I want to have an array of sub-arrays, where each sub-array can be a different size, but I need to be able to find out the size of each sub-array.
The NSLog keeps returning 1.
You need to store the size somewhere; the language does not do so for bare C arrays. All you have is the address of the first element.
I'd write a wrapper class or struct to hold the array and its metadata (like the length):
typedef struct tag_arrayholder
{
    int *pArray;
    int iLen;
} ArrayHolder;

int first[] = {1, 4};
ArrayHolder holderFirst;
holderFirst.pArray = first;
holderFirst.iLen = sizeof(first) / sizeof(int); // element count, not bytes
arrayOfCPointers[0] = holderFirst;
NSLog(@"size of %d", arrayOfCPointers[0].iLen);
Or, as trojanfoe said, store a special value marking the last position (exactly the approach zero-terminated strings use).
The "sizeof" instruction could be used to know the amount of bytes used by the array, but it works only with static array, with dynamics one it returns the pointer size. So with static array you could use this formula : sizeof(tab)/sizeof(tab[0]) to know the size of your array because the first part give you the tab size in bytes and the second the size of an element, so the result is your amount of element in your array ! But with a dynamic array the only way is to store the size somewhere or place a "sentinal value" at the end of your array and write a loop which count elements for you !
(Sorry for my English i'm french :/)
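A minimal sketch of the sentinel approach, using -1 as the terminator (this only works if -1 can never be a real element):

int first[] = {1, 4, -1}; // -1 marks the end
int len = 0;
while (first[len] != -1) {
    len++;
}
// len == 2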
The NSLog statement is printing 1 because the expression divides the size of the first element of the array (a single pointer) by the size of an int, which here comes out to 1.
So what you currently have is this:
NSLog(#"size of %lu", sizeof(arrayOfCPointers[0]) / sizeof(int));
If you remove the array brackets, you'll get the value you're looking for:
NSLog(#"size of %lu", sizeof(arrayOfCPointers) / sizeof(int));
As other answers have pointed out, this won't work if you pass the array to another method or function, since all that's passed in that case is an address. The only reason the above works is because the array's definition is in the local scope, so the compiler can use the type information to compute the size.