Three.js Visual indication or Effect to show when an object is occluded - optimization

I'm building a program where you control a small avatar (a basic circle geometry or plane) that traverses a scene filled with 3D models and shapes. I'd like to achieve an effect similar to the one found in many video games, where there is some sort of indication that the avatar is behind the various models and shapes. For example, here is an image to explain what I mean:
Example image to show desired effect
It doesn't necessarily need to be the outline of the shape like in the example image. I'm open to any effect, really, that shows some indication that the avatar is behind something, but it also can't be too performance-heavy, as I'd like to get this program running on mobile. Being able to customise the effect somewhat (e.g. color, thickness, etc.) is also highly desirable. Any advice or suggestions would be greatly appreciated; there really doesn't seem to be much information I can find on achieving an effect like this.
Also, I thought it was worth mentioning that so far I have attempted two things on my own. One is just rendering the avatar above everything, which turned out to look really silly and confusing. The other thing I attempted was an Outline post-processing effect (from this library: https://github.com/vanruesc/postprocessing), which actually looked pretty great but proved too performance-heavy to run optimally at all times (not to mention other problems with color blending and transparent/see-through shapes and models).
I understand this is kind of a shot in the dark but thought it didn't hurt to ask.
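For reference, one cheap trick that avoids post-processing altogether (this is not one of the approaches tried in the post above, so treat it as a hedged sketch) is to render a second copy of the avatar with an inverted depth test, so it only shows up where scenery is in front of the avatar. The `avatar` mesh, geometry, and colors below are placeholders; the relevant part is the `depthFunc: THREE.GreaterDepth` material settings:

```typescript
import * as THREE from 'three';

const scene = new THREE.Scene();

// Hypothetical avatar mesh, standing in for the circle/plane described above.
const avatar = new THREE.Mesh(
  new THREE.CircleGeometry(0.5, 32),
  new THREE.MeshBasicMaterial({ color: 0xffffff })
);

// A second copy of the avatar's geometry, drawn with an "inverted" depth test:
// it only passes where something in the scene is closer to the camera than the
// avatar, i.e. exactly where the avatar is occluded.
const occludedIndicator = new THREE.Mesh(
  avatar.geometry,
  new THREE.MeshBasicMaterial({
    color: 0x00ffff,                // tint is freely customisable
    transparent: true,
    opacity: 0.5,                   // so the occluding object stays faintly visible
    depthFunc: THREE.GreaterDepth,  // pass only where the avatar is behind geometry
    depthWrite: false,              // don't disturb the depth buffer
  })
);
occludedIndicator.renderOrder = 999; // draw after the occluding geometry
avatar.add(occludedIndicator);       // follows the avatar automatically

scene.add(avatar);
```

The color and opacity of the indicator are trivially customisable, although this gives a tinted silhouette rather than an outline with controllable thickness.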

Related

How to change aspect ratio of photos taken using AVFoundation?

I am using AVFoundation to take pictures instead of UIImagePicker because of how customizable the user interface presented to the user can be. When I use it, the pictures are saved with the same aspect ratio as the iPhone's video feed. What I want is for the pictures to be saved in the same aspect ratio as normal photos.
The way I am currently approaching this is to overlay a black bar over the excess area of the preview display and then just crop the photo after saving it as an image.
However, this feels very crude. I assume that using AVFoundation to take photos is a common thing to do, so I assume I must be missing something!
I have used this example code, and I have read through the AVFoundation documentation but can only assume that I am missing a function. I have also read through similar questions which describe the process by which I might go about cropping images, but that isn't really my concern.
On the other hand, if there is no standard way to do this, please do let me know so that I can stop worrying that I am approaching it in a convoluted way.
Also, I am using Objective-C so if answers contain code, please could you use the same language?

Detect multiple bodies in Kinect?

I am working with the Kinect in openFrameworks using the ofxKinect addon, which is great and plenty of fun!
Anyway, I am looking for some pointers or a direction for dealing with multiple bodies on the screen. I was thinking of making a rect around each detected body, and when the rects intersect something could happen: an effect or anything.
So what I am looking for are ideas, or something that could point me in the right direction, for detecting multiple bodies when using a Kinect.
Right now, based on the depth image I get from the Kinect, I go through each pixel and create a bunch of smaller rectangles with some padding, and group them into a larger bounding rectangle if they are separate from another rectangle group. This is not ideal, as it only deals with the pixel values, is not really separating bodies from each other, and is not giving me the results I am looking for.
So any ideas would be greatly appreciated!
If you want to use ofxKinect, a quick solution would be to threshold on depth and assume bodies, and no other objects, will be within a certain depth range. This should make it easy to use OpenCV's contour finder to isolate the outlines of the bodies and get the bounding rectangles. If the rectangles intersect (and ofRectangle already does the math for you), trigger the reaction you need. Also, remember to trigger it only once if the effect isn't showing already; otherwise you will trigger the effect multiple times per second while the two bodies' bounding rectangles intersect.
You could try something a bit more hardcore and use ofxCv (not just ofxOpenCV) to tap into the HOG functionality. This is slow in itself and not ideal with the depth map, but hopefully you can run it every few seconds just to detect a person and their depth, then keep tracking that movement.
Personally, if you want to track people with the Kinect, I recommend using ofxOpenNI, as it already provides the scene segmentation feature, and even if you don't track the skeletons you can still get useful information like the pixels pertaining to each body and their centre of mass. I'm guessing the Microsoft Kinect SDK has a similar feature and there should be an oF addon, but it's Windows-only.
ofxKinect/libfreenect does not offer any people detection features, so you will need to roll your own.
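As a rough illustration of the intersect-and-trigger-once logic from the first paragraph of this answer (shown here as a language-agnostic TypeScript sketch; in openFrameworks you would take the bounding boxes from the contour finder and use ofRectangle instead, and `startEffect` is a placeholder for whatever reaction you want):

```typescript
interface Rect { x: number; y: number; w: number; h: number; }

// Axis-aligned bounding-box intersection test.
function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

let effectActive = false; // remembers whether the effect is already showing

// Call once per frame with the bounding boxes of the two detected bodies.
function update(bodyA: Rect, bodyB: Rect): void {
  const touching = intersects(bodyA, bodyB);
  if (touching && !effectActive) {
    effectActive = true;
    startEffect();          // hypothetical: trigger the reaction exactly once
  } else if (!touching && effectActive) {
    effectActive = false;   // re-arm once the bodies separate
  }
}

function startEffect(): void {
  console.log('bodies intersect - effect triggered once');
}
```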

Drawing an Isometric image split into layers

I've seen quite a few questions about how to draw isometric tiles, and almost all of them point to drawing back to front, top down. However, I'm trying to find a way to prevent clipping with a single isometric image.
While normally drawing a sprite on top of a single image would not prevent overdrawing on walls and such, I split the image up into three layers: a floor, a lower wall, and a top wall. The player checks the floor for collision, is always drawn in front of the lower wall, and is always drawn behind the top wall. The result looks like the following:
While this seems to work decently well, I'd like to know the most efficient way to draw these sorts of isometric scenes. I've considered tiles; however, that raises the question of how to draw multi-tiled buildings and such. If tiling becomes a better option I will create a new question about that. For now let's assume I'm using a single image broken into layers.
This is somewhat easier for my artist, however: they can draw a single scene in isometric and split it up into layers, eliminating the need for a map creator, and I can then use pixel collision to get precise collision with the environment.
Is using a multi-layered scene even a good approach for this? My biggest concern is preventing overdrawing and breaking perspective. I've also seen many good examples of drawing everything using tiles, but then I'm limited to a certain scale, and that raises even more questions. Do you know the best way to approach this? Should I use tiles instead of a single image split into layers?
I plan to code this in either MonoGame or Processing.
(I would have posted this on gamedev but I can not post images there)
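For what it's worth, the three-layer draw order described in the question boils down to something like the following sketch (written against the HTML canvas API purely for illustration, since no engine has been chosen yet; the layer and player objects are placeholders, and the same ordering applies in MonoGame or Processing):

```typescript
interface Layer { image: ImageBitmap; }

function drawScene(
  ctx: CanvasRenderingContext2D,
  floor: Layer, lowerWall: Layer, topWall: Layer,
  player: { image: ImageBitmap; x: number; y: number }
): void {
  ctx.drawImage(floor.image, 0, 0);                 // 1. floor - also used for collision checks
  ctx.drawImage(lowerWall.image, 0, 0);             // 2. lower wall - player is always in front of this
  ctx.drawImage(player.image, player.x, player.y);  // 3. the avatar
  ctx.drawImage(topWall.image, 0, 0);               // 4. top wall - always drawn over the player
}
```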

Create mock 3D "space" with forwards and backwards navigation

In iOS, I'd like to have a series of items in "space", similar to the way Time Machine works. The "space" would be navigated by a scroll-bar-like feature on the side of the page. So if the person scrolls up, it would essentially zoom in within the space, and objects that were further away would come closer to the reference point. If one zooms out, those objects would fade into the back and whatever is behind the frame of reference would come into view. Kind of like this.
I'm open to a variety of solutions. I imagine there's a relatively easy solution in OpenGL, I just don't know where to begin.
Check out Nick Lockwood's iCarousel on GitHub. It's a very good component. The example code he provides uses a custom carousel style very much like what you describe. You should get there with just a few tweaks.
As you said, in OpenGL (ES) it is relatively easy to accomplish what you ask; however, it may not be equally easy to explain it to someone who is not confident with OpenGL :)
First of all, I suggest you take a look at the Red Book, the reference guide to OpenGL, or at the OpenGL Wiki.
To begin, you could do some practice using GLUT; it will help you gain confidence with OpenGL by providing a high-level API that lets you skip the boring side of setting up an OpenGL context and go directly to the drawing part.
OpenGL ES is a subset of OpenGL, so it has essentially the same structure. Once you have understood how to use OpenGL, it shouldn't be so difficult to use OpenGL ES. Of course, Apple's documentation is a very important resource.
Now that you know a lot about OpenGL, you should be able to easily understand how your program should be structured.
You may, for example, keep your viewpoint fixed and translate the world (or vice versa). There is not (of course) a universal solution, especially because the only thing that matters is the final result.
Another solution (maybe equally good, depending on your needs) may be to simply scale images (representing the objects of your world) up and down to simulate movement through the space.
For example, you could use an array to store all of your images and use a slider to set (increase/decrease) the dimensions of your image. Once the image becomes too large for the display, you could gradually decrease its alpha so that the image behind it slowly appears. Take a look at the UIImageView reference; it contains all the APIs you need for this.
This may lead to a loss of 3-dimensionality, but it's probably a simpler/faster solution than learning OpenGL.
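To make the scale-and-fade idea concrete, here is a rough sketch of the maths (in TypeScript for brevity; the answer itself is about UIKit/UIImageView, and the particular depth/slider mapping below is an assumption, not the only way to do it):

```typescript
// Each item sits at its own fixed depth; `sliderValue` is the current
// navigation position along the same axis.
interface Item { depth: number; }

function styleFor(item: Item, sliderValue: number): { scale: number; alpha: number } {
  // How far in front of the viewer the item currently is.
  const distance = item.depth - sliderValue;

  if (distance < 0) {
    // Behind the viewer: not drawn at all.
    return { scale: 0, alpha: 0 };
  }

  // Closer items are drawn larger; far items shrink towards a vanishing point.
  const scale = 1 / (1 + distance);

  // Once an item grows past "full screen" (distance approaching 0),
  // fade it out so the item behind it becomes visible.
  const alpha = Math.min(1, distance);

  return { scale, alpha };
}
```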

Manipulating / Resizing / Scaling an image in vb.net

Imagine I have a rectangle, say 400px x 300px. Then let's say I want to load an image into it. All of this is very easy using System.Drawing.DrawImage.
Rectangle http://img576.imageshack.us/img576/2363/rectangle.gif
But then I want to leave the left-hand side at 300px but change the right-hand side to 250px. I can draw the box using four DrawLines, but I don't know how to squash the image into the new shape. I want the right-hand side of the shape to be 250px, the left side 300px, and the top and bottom 400px.
Resized http://img85.imageshack.us/img85/3479/rectangle2.gif
I can't use DrawImage as it expects the left and right sides to be the same height. Is there a way to manipulate the image into the new shape?
I've looked at other questions, but they only apply where the left- and right-hand sides are equal.
Any thoughts on how to squash an image into a shape that does not have parallel sides?
(If it helps, I'm happy to sacrifice image quality to fit the right shape.)
Disclaimer: I work for Atalasoft.
Our DotImage product has a command called QuadrilateralWarpCommand that can do this. It's in DotImage Photo.
What you want to do is non-trivial (but also very powerful).
@Heinzi is correct; the general class is called warp transformations. What you're trying to do is specifically a perspective transformation. At a high level, it involves running the individual coordinates through a transformation matrix to get their new positions, and then interpolating between pixel values based on the old and new locations.
This article talks about some transformations, one of them being a shear, so it might be helpful overall. I'm not sure; I haven't read it closely. In general, you want to google for something approximately like "c# image transformation" or "c# image perspective transform".
Depending on what you're planning on using it for, buying a library might be the best way to go about it, although there is a lot to learn about image manipulation by doing it yourself.
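To make the interpolation idea concrete, here is a deliberately naive sketch that is not from any of the answers: the image is drawn one 1-pixel-wide column at a time, with each column's height linearly interpolated between the two edge heights (300px and 250px in the question). It is shown with the HTML canvas API rather than VB.NET and ignores true perspective correction, which matches the asker's willingness to sacrifice quality; the same slicing idea carries over to Graphics.DrawImage in GDI+.

```typescript
function drawSquashed(
  ctx: CanvasRenderingContext2D,
  img: HTMLImageElement,
  x: number, y: number,
  width: number,        // e.g. 400
  leftHeight: number,   // e.g. 300
  rightHeight: number   // e.g. 250
): void {
  for (let col = 0; col < width; col++) {
    const t = col / (width - 1);
    const h = leftHeight + (rightHeight - leftHeight) * t; // interpolated column height
    const srcX = (col / width) * img.width;                // matching source column
    // Keep the trapezoid vertically centred; drop the `/ 2` to keep the top edge flat instead.
    const destY = y + (leftHeight - h) / 2;
    ctx.drawImage(img, srcX, 0, img.width / width, img.height,
                  x + col, destY, 1, h);
  }
}
```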
I did not find a solution to your problem, but I have some information which might help you along:
What you want is called a warp transformation.
As far as I know, the .NET Framework natively supports this kind of transformation only for a GraphicsPath, namely via the GraphicsPath.Warp method. Unfortunately, I don't think that this will help you, unless you are willing to redraw your image using a .NET GraphicsPath object.
If you need the transformation directly in the UI layer, your UI library might help: Silverlight, for example, includes the PlaneProjection class, which can be used for such effects; in WPF, the 3D engine might be useful for this (requiring more programming effort, though).