I'm writing an Objective-C algorithm that compares two images and outputs the differences.
Occasionally two identical images will be passed in. Is there a way to tell immediately from the resulting CGImageRef that it contains no data? (i.e. only transparent pixels).
The algorithm is running at > 20 fps so performance is a top priority.
You should go with CoreImage here.
Have a look at the "CIArea*" filters.
See Core Image Filter reference here: http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Reference/CoreImageFilterReference/Reference/reference.html
This will be a lot faster than the other approaches suggested here.
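For example, here is a rough sketch of that idea (assuming the CIAreaMaximum filter is available on your deployment target): it reduces the whole image to a single pixel holding the per-channel maximum, so if the maximum alpha is zero, every pixel is transparent.
#import <CoreImage/CoreImage.h>

// Sketch only: reduce the image to one pixel with CIAreaMaximum and test its alpha.
static BOOL ImageIsFullyTransparent(CGImageRef imageRef, CIContext *context)
{
    CIImage *image = [CIImage imageWithCGImage:imageRef];

    CIFilter *filter = [CIFilter filterWithName:@"CIAreaMaximum"];
    [filter setValue:image forKey:kCIInputImageKey];
    [filter setValue:[CIVector vectorWithCGRect:image.extent] forKey:kCIInputExtentKey];

    // Read the single output pixel back as RGBA8.
    uint8_t pixel[4] = {0, 0, 0, 0};
    [context render:filter.outputImage
           toBitmap:pixel
           rowBytes:4
             bounds:CGRectMake(0, 0, 1, 1)
             format:kCIFormatRGBA8
         colorSpace:nil];

    // If the maximum alpha over the whole extent is 0, the image is entirely transparent.
    return pixel[3] == 0;
}
Reuse a single CIContext across frames; creating a new context on every call would dominate the cost at 20+ fps.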
Let us know if this works for you.
From a performance perspective, you should incorporate this check into your comparison algorithm. The most expensive operation when working on images is usually loading a small piece of the image into the cache. Once you have it there, there are plenty of ways of working on the data really fast (SIMD), but the problem is that you need to evict and reload the cache with new data all the time, and that is expensive. Since your algorithm already walks every pixel of both images once, it makes sense to compute the SAD (sum of absolute differences) at the same time, while you still have the data in cache. In pseudo-code:
int total_sad = 0
for y = 0; y < height; y++
    for x = 0; x < width; x += 16
        xmm0 = load_data (image0 + y * width + x)
        xmm1 = load_data (image1 + y * width + x)
        /* this stores the differences (your algorithm) */
        store_data (result_image + y * width + x, diff (xmm0, xmm1))
        /* this does the SAD at the same time */
        total_sad += sad (xmm0, xmm1)
if (total_sad == 0)
    print "the images are identical!"
Hope that helps.
Not sure about this, but if you can keep a completely blank sample image around, then:
UIImage *image = [UIImage imageWithCGImage:imgRef]; // imgRef is your CGImageRef
if (blankImageData == nil)
{
    UIImage *blankImage = [UIImage imageNamed:@"BlankImage.png"];
    blankImageData = UIImagePNGRepresentation(blankImage); // blankImageData is a global used as a cache
}
// Now the comparison
imageData = UIImagePNGRepresentation(image); // Image from the CGImageRef
if ([imageData isEqualToData:blankImageData])
{
    // Your image is blank
}
else
{
    // There are some colourful pixels :)
}
I have A* pathfinding implemented in my 2D game and it works well on a plain map with obstacles. Now I'm trying to understand how to modify the algorithm, so it counts rough terrain (hills, forest, etc) as 2 moves instead of 1.
With a movement cost of 1, the algorithm uses the integers 10 and 14 in the move cost function. I'm interested in how to modify these values if one cell actually has a movement cost of 2. Will it be 20:17?
Here's how my algorithm currently computes G and H (adapted from Ray Wenderlich):
// Compute the H score from a position to another (from the current position to the final desired position)
- (int)computeHScoreFromCoord:(CGPoint)fromCoord toCoord:(CGPoint)toCoord
{
    // Here we use the Manhattan method, which calculates the total number of steps moved horizontally and vertically
    // to reach the final desired step from the current step, ignoring any obstacles that may be in the way
    return abs(toCoord.x - fromCoord.x) + abs(toCoord.y - fromCoord.y);
}

// Compute the cost of moving from a step to an adjacent one
- (int)costToMoveFromStep:(ShortestPathStep *)fromStep toAdjacentStep:(ShortestPathStep *)toStep
{
    return ((fromStep.position.x != toStep.position.x)
            && (fromStep.position.y != toStep.position.y))
           ? 14 : 10;
}
If some of the edges have movement cost 2, you will simply add 2 to the G of the parent node, rather than 1.
As for H: it doesn't need to change. The resulting heuristic will still be admissible/consistent.
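A rough sketch of one way to fold that into the existing 10/14 step cost (terrainCostForStep: is a placeholder helper you would write yourself, returning 1 for plain ground and 2 for rough terrain):
// Sketch only: scale the usual straight/diagonal cost by the terrain cost of
// the tile being entered. The H heuristic stays exactly as it is.
- (int)costToMoveFromStep:(ShortestPathStep *)fromStep toAdjacentStep:(ShortestPathStep *)toStep
{
    int baseCost = ((fromStep.position.x != toStep.position.x)
                    && (fromStep.position.y != toStep.position.y))
                   ? 14 : 10;
    return baseCost * [self terrainCostForStep:toStep]; // 10 or 14 on plain ground, 20 or 28 on rough terrain
}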
I think I got it. With this line, the tutorial author checks whether the move is a straight move (1 square) or a diagonal one from the square currently being considered:
return ((fromStep.position.x != toStep.position.x)
        && (fromStep.position.y != toStep.position.y))
       ? 14 : 10;
Unfortunately, this is a really simple case and does not really explain what has to be done. The number 10 is used to make calculations easier (10 = the cost of one straight move), and 14 (the cost of one diagonal move) is an approximation of sqrt(10*10 + 10*10).
I attempted to introduce terrain cost below, and this requires extra information - I need to know which cell I'm going through to reach the destination. This turned out to be really annoying, and the code below is clearly not my best, but I attempted to spell out what's going on at each step.
If I'm making a diagonal move, I need to know its move cost AND the move costs of the 2 squares that can be used to get there. I can then pick the lower movement cost of those two squares and plug it into an equation of the form:
moveCost = (int)sqrt(lowestMoveCost*lowestMoveCost + (stepNode.moveCost*10) * (stepNode.moveCost*10));
Here's the entire loop that checks adjacent steps and creates new steps out of them with the move cost. It finds the tile in my map array and returns its terrain cost.
NSArray *adjSteps = [self walkableAdjacentTilesCoordForTileCoord:currentStep.position];
for (NSValue *v in adjSteps) {
ShortestPathStep *step = [[ShortestPathStep alloc] initWithPosition:[v CGPointValue]];
// Check if the step isn't already in the closed set
if ([self.spClosedSteps containsObject:step]) {
continue; // Ignore it
}
tileIndex = [MapOfTiles tileIndexForCoordinate:step.position];
DLog(#"point (x%.0f y%.0f):%i",step.position.x,step.position.y,tileIndex);
stepNode = [[MapOfTiles sharedInstance] mapTiles] [tileIndex];
// int moveCost = [self costToMoveFromStep:currentStep toAdjacentStep:step];
//in my case 0,0 is bottom left, y points up x points right
if((currentStep.position.x != step.position.x) && (currentStep.position.y != step.position.y))
{
//move one step away - easy, multiply move cost by 10
moveCost = stepNode.moveCost*10;
}else
{
possibleMove1 = 0;
possibleMove2 = 0;
//we are moving diagonally, figure out in which direction
if(step.position.y > currentStep.position.y)
{
//moving up
possibleMove1 = tileIndex + 1;
if(step.position.x > currentStep.position.x)
{
//moving right and up
possibleMove2 = tileIndex + tileCountTall;
}else
{
//moving left and up
possibleMove2 = tileIndex - tileCountTall;
}
}else
{
//moving down
possibleMove1 = tileIndex - 1;
if(step.position.x > currentStep.position.x)
{
//moving right and down
possibleMove2 = tileIndex + tileCountTall;
}else
{
//moving left and down
possibleMove2 = tileIndex - tileCountTall;
}
}
moveNode1 = nil;
moveNode2 = nil;
CGPoint coordinate1 = [MapOfTiles tileCoordForIndex:possibleMove1];
CGPoint coordinate2 = [MapOfTiles tileCoordForIndex:possibleMove2];
if([adjSteps containsObject:[NSValue valueWithCGPoint:coordinate1]])
{
//we know that the possible move to reach the destination has been deemed walkable, get its move cost from the map
moveNode1 = [[MapOfTiles sharedInstance] mapTiles] [possibleMove1];
}
if([adjSteps containsObject:[NSValue valueWithCGPoint:coordinate2]])
{
//we know that the second possible move is walkable
moveNode2 = [[MapOfTiles sharedInstance] mapTiles] [possibleMove2];
}
#warning not sure about this one if the algorithm has to backtrack really far back
//find out which square has the lowest move cost
lowestMoveCost = fminf(moveNode1.moveCost, moveNode2.moveCost) * 10;
moveCost = (int)sqrt(lowestMoveCost*lowestMoveCost + (stepNode.moveCost*10) * (stepNode.moveCost*10));
}
// Compute the cost from the current step to that step
// Check if the step is already in the open list
NSUInteger index = [self.spOpenSteps indexOfObject:step];
if (index == NSNotFound) { // Not on the open list, so add it
// Set the current step as the parent
step.parent = currentStep;
// The G score is equal to the parent G score + the cost to move from the parent to it
step.gScore = currentStep.gScore + moveCost;
// Compute the H score which is the estimated movement cost to move from that step to the desired tile coordinate
step.hScore = [self computeHScoreFromCoord:step.position toCoord:toTileCoord];
// Adding it with the function which is preserving the list ordered by F score
[self insertInOpenSteps:step];
}
else { // Already in the open list
step = (self.spOpenSteps)[index]; // To retrieve the old one (which has its scores already computed ;-)
// Check to see if the G score for that step is lower if we use the current step to get there
if ((currentStep.gScore + moveCost) < step.gScore) {
// The G score is equal to the parent G score + the cost to move from the parent to it
step.gScore = currentStep.gScore + moveCost;
// Because the G Score has changed, the F score may have changed too
// So to keep the open list ordered we have to remove the step, and re-insert it with
// the insert function which is preserving the list ordered by F score
// Now we can remove it from the list without being afraid that it will be released
[self.spOpenSteps removeObjectAtIndex:index];
// Re-insert it with the function which is preserving the list ordered by F score
[self insertInOpenSteps:step];
}
}
}
These types of problems are quite common in, say, chip routing and, yes, gamedev.
The standard approach is to build a graph over your grid (in C++ I would point at Boost's "grid graph" or a similar structure). If you can afford an object per vertex, the solution is quite easy.
You connect two vertices (orthogonal neighbors or diagonally adjacent cells) by an edge, unless there is an obstacle between them. You assign this edge a weight equal to the edge length (10 or 14) times the terrain cost. Some people prefer not to exclude obstacle edges but to assign extremely high weights to them instead (an advantage of that approach is that you are guaranteed to find at least some path, even when the object is stuck on an island).
Then you apply the A* algorithm. Your heuristic function (H) can be "pessimistic" (Euclidean distance times the maximum move cost) or "optimistic" (Euclidean distance times the minimum move cost) or anything in between; note that only the optimistic variant is guaranteed to stay admissible, so only it preserves A*'s optimality guarantee. Different heuristics give your search slightly different "personalities", but usually this does not matter much.
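A rough sketch of the "optimistic" variant on top of the H method from the question, scaled to the same 10-per-step units the G score uses (kMinTerrainCost is a placeholder constant, 1 for the cheapest terrain on your map):
// Sketch only: Manhattan distance times the base step cost times the cheapest
// terrain cost on the map. It never overestimates, so the heuristic stays admissible.
static const int kMinTerrainCost = 1;

- (int)computeHScoreFromCoord:(CGPoint)fromCoord toCoord:(CGPoint)toCoord
{
    int manhattan = abs((int)(toCoord.x - fromCoord.x)) + abs((int)(toCoord.y - fromCoord.y));
    return manhattan * 10 * kMinTerrainCost;
}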
I'm doing some audio programming for a client and I've come across an issue which I just don't understand.
I have a render callback which is called repeatedly by CoreAudio. Inside this callback I have the following:
// Get the audio sample data
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
Float32 data;
// Loop over the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    // Convert from SInt16 to Float32 just to prove it's possible
    data = (Float32) outA[frame] / (Float32) 32768;

    // Convert back to SInt16 to show that everything works as expected
    outA[frame] = (SInt16) round(data * 32768);
}
This works as expected which shows there aren't any unexpected rounding errors.
The next thing I want to do is add a small delay. I add a global variable to the class:
i.e. below the #implementation line
Float32 last = 0;
Then I use this variable to get a one frame delay:
// Get the audio sample data
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
Float32 data;
Float32 next;
// Loop over the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    // Convert from SInt16 to Float32 just to prove it's possible
    data = (Float32) outA[frame] / (Float32) 32768;

    next = last;
    last = data;

    // Convert back to SInt16 to show that everything works as expected
    outA[frame] = (SInt16) round(next * 32768);
}
This time round there's a strange audio distortion on the signal.
I just can't see what I'm doing wrong! Any advice would be greatly appreciated.
It seems that what you've done is introduced an unintentional phaser effect on your audio.
This is because you're only delaying one channel of your audio, so the result is that you have the left channel being delayed one frame behind the right channel. This would result in some odd frequency cancellations / amplifications that would suit your description of "a strange audio distortion".
Try applying the effect to both channels:
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
AudioSampleType *outB = (AudioSampleType *)ioData->mBuffers[1].mData;
// apply the same effect to outB as you did to outA
This assumes that you are working with stereo audio (i.e ioData->mNumberBuffers == 2)
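A sketch of what the loop could look like with both channels delayed (lastLeft and lastRight stand in for whatever per-channel state you keep between callbacks):
AudioSampleType *outA = (AudioSampleType *)ioData->mBuffers[0].mData;
AudioSampleType *outB = (AudioSampleType *)ioData->mBuffers[1].mData;
Float32 dataA, dataB, nextA, nextB;

for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    dataA = (Float32) outA[frame] / (Float32) 32768;
    dataB = (Float32) outB[frame] / (Float32) 32768;

    nextA = lastLeft;  lastLeft  = dataA;  // one-frame delay, left channel
    nextB = lastRight; lastRight = dataB;  // one-frame delay, right channel

    outA[frame] = (SInt16) round(nextA * 32768);
    outB[frame] = (SInt16) round(nextB * 32768);
}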
As a matter of style, it's (IMO) a bad idea to use a global like your last variable in a render callback. Use the inRefCon to pass in proper context (either as a single variable or as a struct if necessary). This likely isn't related to the problem you're having, though.
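For example, a rough sketch of passing state through inRefCon instead of a global (RenderState and its fields are just placeholders for whatever context you need):
#import <AudioToolbox/AudioToolbox.h>

typedef struct {
    Float32 lastLeft;
    Float32 lastRight;
} RenderState;

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    RenderState *state = (RenderState *)inRefCon;
    // ... use state->lastLeft / state->lastRight instead of the global ...
    return noErr;
}

// When installing the callback, hand it a pointer to your state:
// AURenderCallbackStruct callbackStruct = { RenderCallback, &renderState };
// AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback,
//                      kAudioUnitScope_Input, 0,
//                      &callbackStruct, sizeof(callbackStruct));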
I am testing a background-loop animation where there are two images, both 1024x768 pixels, which move leftwards, go offscreen, then jump back to the other side, and repeat.
I was able to move both background images at a constant speed (that part works), and then I tried the following code to make them jump back, but there was a problem:
if (background.center.x < -511) {
    background.center = CGPointMake(1536, background.center.y);
}
if (background2.center.x < -511) {
    background2.center = CGPointMake(1536, background2.center.y);
}
Somehow this is not working the way I expected. It leaves a few pixels of gap every time, and I am confused why. Does anyone know what's causing this to happen and how to fix it? Thanks!
It seems like you have forgotten to take the distance moved into account. The comparison probably only triggers after you have already moved past the threshold; I guess your movement is larger than 1 pixel per frame.
I am not sure what kind of values are feeding your movement, but to take that overshoot into account you should do something like this:
if (background.center.x < -511) {
    CGFloat dist = background.center.x + 512;
    background.center = CGPointMake(1536 + dist, background.center.y);
}
if (background2.center.x < -511) {
    CGFloat dist = background2.center.x + 512;
    background2.center = CGPointMake(1536 + dist, background2.center.y);
}
Rather than have the two images move (sort of) independently, I would keep track of a single backgroundPosition variable and then constantly update the position of both images relative to that one position. This should keep everything nice and tidy:
CGFloat const backgroundWidth = 1024;
CGFloat const backgroundSpeed = 2;

- (void)animateBackground {
    backgroundPosition -= backgroundSpeed;
    if (backgroundPosition < 0) {
        backgroundPosition += backgroundWidth;
    }
    // center is a property, so assign a whole CGPoint rather than writing to center.x directly
    background1.center = CGPointMake(backgroundPosition - backgroundWidth/2, background1.center.y);
    background2.center = CGPointMake(backgroundPosition + backgroundWidth/2, background2.center.y);
}
I'm working on a drawing app for iPad using Cocos-iOS and I'm having performance issues with drawing lines as a type of CCNode. I understand that a node's draw method is called every time the canvas is repainted, and the current code is far too heavy to run on every repaint:
for (LineNodePoint *point in self.points) {
    start = end;
    end = point;
    if (start && end) {
        float distance = ccpDistance(start.point, end.point);
        if (distance > 1) {
            int d = (int)distance;
            float difx = end.point.x - start.point.x;
            float dify = end.point.y - start.point.y;
            for (int i = 0; i < d; i++) {
                float delta = i / distance;
                [[self.brush sprite] setPosition:ccp(start.point.x + (difx * delta), start.point.y + (dify * delta))];
                [[self.brush sprite] visit];
            }
        }
    }
}
Very heavy...
I either need a better way to draw the lines or to be able to cache the drawing as a raster.
Thanks in advance for any help.
How about ccDrawLine or CCMutableTexture? CCMutableTexture is for manipulating pixels and works along the lines of the render-texture caching you described.
ccDrawLine: cocos2d for iPhone 1.0.0 API reference
CCMutableTexture: "Fast set/getPixel for an opengl texture?" and "[render texture] pixel manipulation (integrated CCMutableTexture functionality)"
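If you go the render-texture route, a rough sketch could look like this (cocos2d 1.x API; canvas is a new node you would keep around, and d, distance, difx, dify, start come from your existing loop). You stamp the brush into the texture once per new segment, so the per-frame draw only displays the cached texture:
// Create the cache once and add it to the scene (sketch only).
CCRenderTexture *canvas = [CCRenderTexture renderTextureWithWidth:1024 height:768];
[self addChild:canvas];

// When a new line segment arrives, render the brush stamps into the texture.
[canvas begin];
for (int i = 0; i < d; i++) {
    float delta = i / distance;
    [[self.brush sprite] setPosition:ccp(start.point.x + (difx * delta),
                                         start.point.y + (dify * delta))];
    [[self.brush sprite] visit];
}
[canvas end];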
I want to make an image move realistically, controlled by the accelerometer, like in any labyrinth game. Below is what I have so far, but it's very jittery and isn't realistic at all. The ball image never seems to be able to settle and keeps making small jittery movements all over the place.
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    deviceTilt.x = 0.01 * deviceTilt.x + (1.0 - 0.01) * acceleration.x;
    deviceTilt.y = 0.01 * deviceTilt.y + (1.0 - 0.01) * acceleration.y;
}

- (void)onTimer {
    ballImage.center = CGPointMake(ballImage.center.x + (deviceTilt.x * 50), ballImage.center.y + (deviceTilt.y * 50));
    if (ballImage.center.x > 279) {
        ballImage.center = CGPointMake(279, ballImage.center.y);
    }
    if (ballImage.center.x < 42) {
        ballImage.center = CGPointMake(42, ballImage.center.y);
    }
    if (ballImage.center.y > 419) {
        ballImage.center = CGPointMake(ballImage.center.x, 419);
    }
    if (ballImage.center.y < 181) {
        ballImage.center = CGPointMake(ballImage.center.x, 181);
    }
}
Is there some reason why you can not use the smoothing filter provided in response to your previous question: How do you use a moving average to filter out accelerometer values in iPhone OS ?
You need to calculate a running average of the values. To do this, store the last n values in an array, push each new reading onto it, and drop the oldest one whenever you read the accelerometer data. Here is some pseudocode:
const SIZE = 10;
float[] xVals = new float[SIZE];
float xAvg = 0;

function runAverage(float newX){
    xAvg += newX/SIZE;
    xVals.push(newX);
    if (xVals.length > SIZE){
        xAvg -= xVals.shift()/SIZE;   // remove the oldest value, not the newest
    }
}
You need to do this for all three axis. Play around with the value of SIZE; the larger it is, the smoother the value, but the slower things will seem to respond. It really depends on how often you read the accelerometer value. If it is read 10 times per second, then SIZE = 10 might be too large.
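If it helps, here is roughly what that could look like in Objective-C for the x axis (kFilterSize, the history buffer, and RunningAverageX are placeholder names; repeat the same for y and z):
#define kFilterSize 10

static float xHistory[kFilterSize];
static int   xIndex = 0;
static int   xCount = 0;
static float xSum   = 0;

// Circular buffer: drop the oldest sample, add the newest, return the mean.
static float RunningAverageX(float newX)
{
    if (xCount == kFilterSize) {
        xSum -= xHistory[xIndex];   // the slot about to be overwritten holds the oldest value
    } else {
        xCount++;
    }
    xHistory[xIndex] = newX;
    xSum += newX;
    xIndex = (xIndex + 1) % kFilterSize;
    return xSum / xCount;
}

// In the accelerometer delegate:
// deviceTilt.x = RunningAverageX(acceleration.x);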