Reliably access and modify captured camera frames under SceneKit - Objective-C

I'm trying to add a B&W filter to the camera images of an ARSCNView and then render colored AR objects over them.
I'm almost there with the following code added to the beginning of - (void)renderer:(id<SCNSceneRenderer>)aRenderer updateAtTime:(NSTimeInterval)time:
CVPixelBufferRef bg = self.sceneView.session.currentFrame.capturedImage;
if (bg) {
    // lock the buffer before touching its planes on the CPU
    CVPixelBufferLockBaseAddress(bg, 0);
    // plane 1 of the bi-planar YCbCr buffer holds the interleaved Cb/Cr samples
    char *k1 = (char *)CVPixelBufferGetBaseAddressOfPlane(bg, 1);
    if (k1) {
        size_t x1 = CVPixelBufferGetWidthOfPlane(bg, 1);
        size_t y1 = CVPixelBufferGetHeightOfPlane(bg, 1);
        // neutral chroma (128) for every Cb/Cr pair makes the image greyscale
        memset(k1, 128, x1 * y1 * 2);
    }
    CVPixelBufferUnlockBaseAddress(bg, 0);
}
This runs really fast on the device, but here's the thing: sometimes a colored frame is still displayed.
I've checked that my filtering code is executed, but I assume it runs too late; SceneKit's pipeline has already processed the camera input.
Calling the code earlier would help, but updateAtTime: is the earliest point where one can add custom frame-by-frame code.
Getting notified of frame captures might help, but it looks like the whole AVCaptureSession is inaccessible.
The Metal ARKit example shows how to convert the camera image to RGB, and that is where I would do the filtering, but that shader is hidden when using SceneKit.
I've tried this possible answer, but it's way too slow.
So how can I avoid the missed frames and convert the camera feed to B&W reliably?

Here's the key for this problem:
session:didUpdateFrame:
Provides a newly captured camera image and accompanying AR information to the delegate.
So I just moved the CVPixelBufferRef manipulation (the image-filtering code) from
- (void)renderer:(id<SCNSceneRenderer>)aRenderer updateAtTime:(NSTimeInterval)time
to
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame
I made sure to set self.sceneView.session.delegate = self so that this delegate method gets called.
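For reference, here is a minimal sketch of the relocated filtering code, reusing the chroma-plane trick from the question (the pixel-buffer locking calls are an addition for safety; everything else mirrors the original snippet):
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame
{
    CVPixelBufferRef bg = frame.capturedImage;
    if (!bg) {
        return;
    }
    CVPixelBufferLockBaseAddress(bg, 0);
    // plane 1 of the bi-planar YCbCr buffer holds the interleaved Cb/Cr samples
    char *chroma = (char *)CVPixelBufferGetBaseAddressOfPlane(bg, 1);
    if (chroma) {
        size_t w = CVPixelBufferGetWidthOfPlane(bg, 1);
        size_t h = CVPixelBufferGetHeightOfPlane(bg, 1);
        // neutral chroma (128) for every Cb/Cr pair turns the frame greyscale
        memset(chroma, 128, w * h * 2);
    }
    CVPixelBufferUnlockBaseAddress(bg, 0);
}
And in setup code, for example viewDidLoad:
self.sceneView.session.delegate = self;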

Looping a sprite vertically - Objective-C, SpriteBuilder

Note: for this I am using a program called SpriteBuilder, which allows me to create a game with less code than would normally be needed. If you know a solution that's all code, then by all means feel free to share it :)
Also, for this question I followed a tutorial at this link: Build Your Own Flappy Bird Clone. Just scroll down to the part that says "Loop the Ground".
So here's my problem. I'm currently working on a game, and I created a camera which scrolls vertically along with the character sprite I created; however, I need a certain image to loop. When the image leaves the bottom of the screen, I would like it to wrap around to the top of the screen, infinitely. For this I created two identical images (in this case the bark of a tree). One will be on screen while the other is off screen, so as the first image leaves the screen the second replaces it (seamlessly). I created two objects for the images, named them _ground1 and _ground2, and I also created an NSArray to store them in. (Please refer to the link above if this is somewhat confusing.)
Here is the code that I have:
CCNode *_ground1;
CCNode *_ground2;
NSArray *_grounds;

for (CCNode *ground in _grounds) {
    // get the world position of the ground
    CGPoint groundWorldPosition = [_physicsNode convertToWorldSpace:ground.position];
    // get the screen position of the ground
    CGPoint groundScreenPosition = [self convertToNodeSpace:groundWorldPosition];
    // if the piece has scrolled one full height below the screen, move it up by two heights
    if (groundScreenPosition.y < (-1 * ground.contentSize.height)) {
        ground.position = ccp(ground.position.x, ground.position.y + 2 * ground.contentSize.height);
    }
}
For some reason when I try this, it doesn't seem to work. What happens is that the camera travels vertically as it is meant to, but the images do not loop. Once the two images leave the bottom of the screen, no new images replace them.
I have also done this project following the same tutorial. It works fine, but you may have made a mistake setting up the variables in SpriteBuilder. Also try replacing your code with the version below: you only used "less than", which may be the issue.
if (groundScreenPosition.y <= (-1 * ground.contentSize.height)) {
    ground.position = ccp(ground.position.x, ground.position.y + 2 * ground.contentSize.height);
}
You are using CCNode objects as _ground1 and _ground2.
CCNode objects do not have a meaningful contentSize by default; it returns zero unless you explicitly set it in SpriteBuilder.
Make sure that you are using CCSprite objects both in SpriteBuilder and in your code.
Also, as a friendly hint, you should consider refactoring (renaming) your sprites with names that are more meaningful for your use case, such as _treeBark1 and _treeBark2.
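A small sketch of those two suggestions combined (the didLoadFromCCB population is illustrative; the variables are assumed to be connected to CCSprite nodes in SpriteBuilder via code connections):
// declare the looping pieces as CCSprite so contentSize comes from the texture
CCSprite *_treeBark1;
CCSprite *_treeBark2;
NSArray *_treeBarks;

- (void)didLoadFromCCB {
    // collect the two pieces so the update loop can iterate over them
    _treeBarks = @[_treeBark1, _treeBark2];
}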

Dynamic resizing of the body (LibGDX)

I have a circle-shaped dynamic body and I need to resize it during the game (it appears as a point, then grows into a circle, and after that it starts moving). How should I do that?
One idea is to use an animation (the circle keeps the same radius, but due to the animation it looks like it grows), but I'm not sure whether that's the right way, and besides, I don't know how to implement it.
To scale the circle: if you are using a sprite, just scale it with sprite.setScale(float). If your sprite is attached to a Box2D circle shape, get the shape from the body's fixture and set its radius:
Fixture fixture = body.getFixtureList().first(); // the body's fixtures, as a libGDX Array
Shape shape = fixture.getShape();
shape.setRadius(radiusValue);
And if you are using a ShapeRenderer, just scale the points you pass to the ShapeRenderer.
I assume that you are talking about a Box2D body.
It is not possible to resize a circle-shaped fixture in Box2D; Box2D is a rigid body simulator. What you would have to do is destroy the fixture and replace it with a smaller or bigger version of the circle. But this can cause a lot of problems, since you cannot destroy a fixture while there is still a contact on it, for example.
It would be better to keep the circle the same size and just simulate a change in size with an animation of a texture on top.
If you cannot simulate that, then maybe try the following approach: Have several versions of that circle in different sizes and keep them on top of each other. Implement a ContactFilter which will only cause contacts for the one circle which is currently "active".
Inside any object class wrapping a Box2D body, I use the following for dynamic resizing:
public void resize(float newRadius) {
    // remove the old fixture; the body itself is kept
    this.body.destroyFixture(this.fixture);
    // keep the mass constant by recomputing the density for the new area
    fixtureDef.density = (float) (this.mass / (Math.PI * newRadius * newRadius));
    this.radius = newRadius;
    CircleShape circle = new CircleShape();
    circle.setRadius(newRadius);
    this.fixtureDef.shape = circle;
    this.fixture = body.createFixture(fixtureDef);
    this.fixture.setUserData(this);
    // dispose the shape only after the fixture is created (createFixture copies it)
    circle.dispose();
}
You can also see the following topic: How to change size after it has been created

XNA "Texture2D" amalgamation

How can I join together multiple Texture2D's into one large Texture2D? I am trying to optimize an isometric tile game by splitting the map up into chunks.
I have tried googling it, and found articles regarding "RenderTarget2D", but am unsure how to implement this.
Thanks,
Sam.
Never mind - I worked it out.
For anyone who is also looking for this, you basically draw onto a "RenderTarget2D", as you would onto the screen, using the spriteBatch.
(helpful article)
// declare the target
RenderTarget2D render;
// create it; tileSize is the size of one tile and numberOfTiles is how many tiles you are rendering
render = new RenderTarget2D(GraphicsDevice, (int)(tileSize.X * numberOfTiles.X), (int)(tileSize.Y * numberOfTiles.Y), 0, SurfaceFormat.Color);

// target the render target instead of the back buffer
GraphicsDevice.SetRenderTarget(0, render);
batch.Begin();
// draw each tile here
batch.End();
// target the back buffer again
GraphicsDevice.SetRenderTarget(0, null);

// read the result back as a Texture2D (XNA 3.1 API)
Texture2D myTexture = render.GetTexture();
Sorry for the rather poor explanation - my first try at a tutorial.

How do I analyze video stream on iOS?

For example, there are QR scanners which scan a video stream in real time and extract QR code info.
I would like to check from the video whether a light source is on or off; the light is quite powerful, so that should not be a problem.
I will probably take a video stream as input, maybe turn it into images, and analyze the images or the stream in real time for the presence of the light source (maybe by counting the number of pixels of a certain color in the image?).
How do I approach this problem? Maybe there is some open-source library for this?
It sounds like you are asking about several discrete steps. There are a multitude of ways to do each of them, and if you get stuck on any individual step it would be a good idea to post a question about it individually.
1: Get a video frame
As chaitanya.varanasi said, the AVFoundation framework is the best way of getting access to a video frame on iOS. If you want something less flexible and quicker to set up, try OpenCV's video capture. The goal of this step is to get access to a pixel buffer from the camera. If you have trouble with this, ask about it specifically.
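As a rough Objective-C sketch of this step (captureSession is assumed to be an already configured AVCaptureSession, and the queue label is arbitrary):
// in your capture setup code: ask AVFoundation for BGRA frames so they drop straight into a CV_8UC4 Mat later
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[output setSampleBufferDelegate:self queue:dispatch_queue_create("frames", DISPATCH_QUEUE_SERIAL)];
if ([captureSession canAddOutput:output]) {
    [captureSession addOutput:output];
}

// delegate callback, called once per captured frame
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // hand the buffer to the OpenCV code shown in step 2
}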
2: Put pixel buffer into OpenCV
This part is really easy. If you get the frame from OpenCV's video capture, you are already done. If you get it from AVFoundation, you will need to put it into OpenCV like this:
// Buffer is of type CVImageBufferRef, which is what AVFoundation should be giving you
// I assume it is BGRA or RGBA formatted; if it isn't, change CV_8UC4 to the appropriate format
CVPixelBufferLockBaseAddress(Buffer, 0);
size_t bufferWidth = CVPixelBufferGetWidth(Buffer);
size_t bufferHeight = CVPixelBufferGetHeight(Buffer);
// the row stride may be larger than width * 4 because of padding
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(Buffer);
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(Buffer);
// wrap the buffer in an OpenCV Mat; no memory is copied
cv::Mat image = cv::Mat((int)bufferHeight, (int)bufferWidth, CV_8UC4, pixel, bytesPerRow);
// Process image here
// End processing
CVPixelBufferUnlockBaseAddress(Buffer, 0);
Note that I am assuming you plan to do this in OpenCV, since you used its tag. I also assume you can get the OpenCV framework to link into your project. If that is an issue, ask a specific question about it.
3: Process Image
This part is by far the most open-ended. All you have said about your problem is that you are trying to detect a strong light source. One very quick and easy way of doing that would be to look at the mean pixel value of a greyscale image. If you get the image in colour, you can convert it with cvtColor, then just take the average (cv::mean) to get the mean value. Hopefully you can tell whether the light is on by how that value fluctuates.
chaitanya.varanasi suggested another option; you should check it out too.
OpenCV is a very large library that can do a wide variety of things. Without knowing more about your problem, I don't know what else to tell you.
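A short sketch of that check, continuing from the image Mat built in step 2 (the threshold of 128 is purely illustrative and should be tuned by experiment):
cv::Mat grey;
cv::cvtColor(image, grey, cv::COLOR_BGRA2GRAY);   // drop colour, keep luminance
double meanBrightness = cv::mean(grey)[0];        // average pixel value in the 0-255 range
bool lightIsOn = meanBrightness > 128.0;          // compare against a tuned threshold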
Look at the AVFoundation Framework from Apple.
Hope it helps!
You can try this method: start by sending all frames to an AVCaptureVideoDataOutput. From the method captureOutput:didOutputSampleBuffer:fromConnection: you can sample/calculate every pixel. Source: answer
Also, you can take a look at this SO question where they check whether a pixel is black. If it is such a powerful light source, you can take the inverse of the pixel and then decide using a set threshold for black.
The above sample code only provides access to the pixel values stored in the buffer; you cannot run any commands other than those that change those values on a pixel-by-pixel basis:
for (uint32_t y = 0; y < height; y++)
{
    for (uint32_t x = 0; x < width; x++)
    {
        // zero the green channel of each BGRA pixel
        bgraImage.at<cv::Vec<uint8_t,4> >(y,x)[1] = 0;
    }
}
This—to use your example—will not work with the code you provided:
cv::Mat bgraImage = cv::Mat( (int)height, (int)extendedWidth, CV_8UC4, base );
cv::Mat grey = bgraImage.clone();
cv::cvtColor(grey, grey, 44);

NSImage loading a portion of image

The requirement is like this: I get a single large PNG image for a button, and this single image contains the images for the hover, clicked, and mouse-exit states that need to be displayed.
The single PNG is 1024 x 28, so each sub-image is about 256 x 28.
I have been googling for the best possible approach but couldn't work out how to achieve this.
I have the following approach in mind:
NSImage *pBtnImage[MAX_BUTTON_IMAGES];
for (int i = 0; i < 4; i++) {
    pBtnImage[i] = [[NSImage alloc] initWithData:??????];
}
I want to know what I should pass as the NSData parameter.
Is it possible to load a single image and clip it as needed?
Thanks in advance.
There's no simple Cocoa-supported way to read only a sub-rectangle of an image from its data. It is simple, however, to read the whole image in and composite only a selected rectangle of it when drawing. That said, with the available API you might be better off just using the standard +[NSImage imageNamed:] method to read the images in individually and let the OS handle caching.
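A minimal sketch of the "read the whole image, composite a sub-rectangle" approach (the image name, state index, and geometry are illustrative):
// load the full 1024 x 28 strip once; "buttonStates" is a placeholder name
NSImage *strip = [NSImage imageNamed:@"buttonStates"];

// pick the 256 x 28 slice for the wanted state (0 = normal, 1 = hover, ...)
NSInteger stateIndex = 1;
NSRect sourceRect = NSMakeRect(stateIndex * 256, 0, 256, 28);
NSRect destRect = NSMakeRect(0, 0, 256, 28);

// inside the button view's drawRect:, composite only that slice
[strip drawInRect:destRect
         fromRect:sourceRect
        operation:NSCompositeSourceOver
         fraction:1.0];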
What actual, measured performance problem are you trying to solve? Does one really exist, or is this a case of premature optimization?