Drawing an Isometric image split into layers

I've seen quite a few questions about how to draw isometric tiles, and almost all of them recommend drawing back to front, top to bottom. However, I'm trying to find a way to prevent clipping while using a single isometric image.
Normally, drawing a sprite on top of a single background image would overdraw walls and the like, so I split the image into three layers: a floor, a lower wall, and a top wall. The player checks the floor layer for collision, is always drawn in front of the lower wall, and is always drawn behind the top wall. The result looks like the following:
While this seems to work decently well, I'd like to know the most efficient way to draw these sorts of isometric scenes. I've considered tiles, but that raises the question of how to draw multi-tile buildings and the like. If tiling turns out to be the better option, I'll create a new question about it. For now, let's assume I'm using a single image broken into layers.
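Concretely, the draw order I'm describing boils down to a fixed painter's order per scene, something like this (sketched in plain C++ for illustration; `Sprite` and `drawSprite` are hypothetical stand-ins for whatever MonoGame or Processing provides):

```cpp
// Hypothetical stand-ins for whatever MonoGame/Processing provide.
struct Sprite { /* texture handle, position, ... */ };
static void drawSprite(const Sprite&) { /* blit at the sprite's position */ }

// One frame of the three-layer scene: painter's order, back to front.
void drawScene(const Sprite& floor, const Sprite& lowerWall,
               const Sprite& player, const Sprite& topWall) {
    drawSprite(floor);      // everything sits on top of the floor
    drawSprite(lowerWall);  // the player always occludes the lower wall
    drawSprite(player);     // collision-tested against the floor layer
    drawSprite(topWall);    // the top wall always occludes the player
}
```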
This is also somewhat easier for my artist, who can draw a single scene in isometric view and split it into layers, eliminating the need for a map editor. I can then use pixel collision to get precise collision with the environment.
Is using a multi-layered scene even a good approach for this? My biggest concern is preventing overdraw and avoiding broken perspective. I've also seen many good examples that draw everything with tiles, but then I'm limited to a certain scale, and that raises even more questions. Do you know of the best way to approach this? Should I use tiles instead of a single image split into layers?
I plan to code this in either MonoGame or Processing.
(I would have posted this on gamedev but I cannot post images there.)

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating each point's position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity (see the sketch after this question). I understand it has something to do with tan, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it into a usable format I would need to project the result into 2D coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader, telling it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to use a texture instead. It could also be rewritten to use polygons, but I am trying to keep calculations in the shader to a minimum.
Certain solutions I have tried before have worked, slightly, but this one must be robust. The camera interacting with the 3D object will be able to move completely around and through it, meaning that at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
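For reference: the usual reason camera.unproject_position() seems to trend towards infinity is that AABB corners behind the camera's near plane have no meaningful screen position. A hedged sketch of one workaround, written here as plain C++ rather than GDScript, with project() as a hypothetical stand-in for Godot's camera.unproject_position(): clamp camera-space corners onto the near plane before projecting, then take the 2D min/max of the eight corners.

```cpp
#include <algorithm>
#include <array>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };
struct Rect2 { Vec2 min, max; };

// Hypothetical stand-in for camera.unproject_position(): maps a
// camera-space point (camera looks down -z) to pixel coordinates.
static Vec2 project(const Vec3& p, float focalPx, float cx, float cy) {
    return { cx + focalPx * p.x / -p.z,
             cy - focalPx * p.y / -p.z };   // screen y grows downward
}

// Screen-space bounding rect of the 8 camera-space AABB corners.
static Rect2 screenAabb(const std::array<Vec3, 8>& corners,
                        float focalPx, float cx, float cy, float zNear) {
    Rect2 r{{ 1e30f, 1e30f }, { -1e30f, -1e30f }};
    for (Vec3 c : corners) {
        // A corner behind the near plane would project to nonsense
        // (divide by ~0); clamp it onto the plane first.
        c.z = std::min(c.z, -zNear);
        const Vec2 s = project(c, focalPx, cx, cy);
        r.min.x = std::min(r.min.x, s.x);  r.min.y = std::min(r.min.y, s.y);
        r.max.x = std::max(r.max.x, s.x);  r.max.y = std::max(r.max.y, s.y);
    }
    return r;
}
```

The clamp is an approximation: for the robustness case above, where the camera is inside the model, a corner needing clamping is a reasonable signal to fall back to a full-screen rect, or to clip the box's twelve edges against the near plane exactly.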

Detect multiple bodies in Kinect?

I am working with the Kinect in openFrameworks using the ofxKinect addon, which is great and plenty fun!
Anyway, I am looking for some pointers or a direction for dealing with multiple bodies on the screen. I was thinking of making a rect around each detected body, and when the rects intersect something could happen: an effect or anything.
So what I am looking for are ideas, or anything that could point me in the right direction, for detecting multiple bodies with a Kinect.
Right now, based on the depth image I get from the Kinect, I go through each pixel and create a bunch of smaller rectangles with some padding, grouping them into a larger bounding rectangle if they are separate from another rectangle group. This is not ideal, as it only deals with the pixel values; it is not really separating bodies from each other and is not giving me the results I am looking for.
So any ideas would be greatly appreciated!
If you want to use ofxKinect, a quick solution would be to threshold on depth and assume bodies, and no other objects, will be within a given depth range. This should make it easy to use OpenCV's contour finder to isolate the outlines of the bodies and get their bounding rectangles. If the rectangles intersect (and ofRectangle already does the math for you), trigger the reaction you need. Also, make sure you trigger it only once if the effect isn't already showing; otherwise you will trigger it many times per second for as long as the two bodies' bounding rectangles stay intersected.
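A minimal sketch of that pipeline, loosely following the stock openFrameworks kinectExample (the threshold values and blob-area limits below are this sketch's own guesses, and API details vary slightly between openFrameworks versions):

```cpp
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxCvGrayscaleImage grayImage, grayNear, grayFar;
    ofxCvContourFinder contourFinder;
    int nearThreshold = 230;   // tune both to the depth band
    int farThreshold  = 70;    // where people will actually stand

    void setup() {
        kinect.init();
        kinect.open();
        grayImage.allocate(640, 480);   // Kinect depth resolution
        grayNear.allocate(640, 480);
        grayFar.allocate(640, 480);
    }

    void update() {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        grayImage.setFromPixels(kinect.getDepthPixels());
        // Keep only pixels inside the [far, near] depth band.
        grayNear = grayImage;  grayNear.threshold(nearThreshold, true);
        grayFar  = grayImage;  grayFar.threshold(farThreshold);
        cvAnd(grayNear.getCvImage(), grayFar.getCvImage(),
              grayImage.getCvImage(), NULL);
        grayImage.flagImageChanged();

        // Roughly one blob per body; tune the min/max areas.
        contourFinder.findContours(grayImage, 1000, 640 * 480 / 2,
                                   4, false);

        // Compare every pair of bounding rects.
        for (size_t i = 0; i < contourFinder.blobs.size(); i++) {
            for (size_t j = i + 1; j < contourFinder.blobs.size(); j++) {
                if (contourFinder.blobs[i].boundingRect.intersects(
                        contourFinder.blobs[j].boundingRect)) {
                    // bodies i and j overlap: trigger the effect once
                }
            }
        }
    }
};
```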
You could try something a bit more hardcore and use ofxCv (not just ofxOpenCv) to tap into the HOG person-detection functionality. This is slow in itself and not ideal with the depth map, but you could hopefully run it every few seconds just to detect a person and their depth, then keep tracking that movement.
Personally, if you want to track people with the Kinect, I recommend using ofxOpenNI, as it already provides the scene-segmentation feature; even if you don't track the skeletons, you can still get useful information like the pixels pertaining to each body and their centre of mass. I'm guessing the Microsoft Kinect SDK has a similar feature, and there should be an oF addon for it, but it's Windows only.
ofxKinect/libfreenect does not offer any people detection features, so you will need to roll your own.

How to render a BSP Tree without splitting

I have a BSP tree for depth sorting in an isometric game (I've tried countless other methods). It seems to be close, but in my game I cannot split the assets, so items that intersect the current plane are simply added to both the "behind" and "ahead" nodes (as suggested in http://www.seas.upenn.edu/~cis568/presentations/bsp-techniques.pdf).
When I traverse the tree (from lowest depth to highest), I only render each sprite once (the first time I encounter it), but this seems to place some sprites too low in the display order.
Any insight on this would be greatly appreciated. This is in (mostly) C for iOS, btw.
Thanks in advance (and I try to answer some questions on here, but y'all are so darn fast!).
I don't think a BSP tree will help you sort out the (literal) sprite-sorting issues prevalent in isometric tile rendering. Especially if you have archways like doors, there will always be a point where the character just pops under/above the archway. BSP trees help sort visible areas in a 3D world quickly, but an isometric map is a two-dimensional, layered view where other factors, like the size of individual tile images, play a larger role than position in space.
The most common solution is in fact to split the assets, in order to render the individual parts of an archway (or any larger isometric object) either in front of or behind the player, depending on each part's distance to the camera; a sketch follows.
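Here is what splitting buys you: once an archway is two or more sprite pieces, each piece gets its own depth key, and an ordinary sort replaces the BSP traversal. The names below are hypothetical, and the x + y key is only the most common isometric choice; the right formula depends on your projection.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sprite piece: one part of a split asset (e.g. the front
// pillar of an archway), anchored where it touches the ground.
struct Piece {
    float anchorX, anchorY;   // ground anchor in map coordinates
    int   textureId;
};

// Typical isometric depth key: pieces further "down-right" draw later.
static float depthKey(const Piece& p) { return p.anchorX + p.anchorY; }

void renderPieces(std::vector<Piece>& pieces) {
    std::sort(pieces.begin(), pieces.end(),
              [](const Piece& a, const Piece& b) {
                  return depthKey(a) < depthKey(b);
              });
    for (const Piece& p : pieces) {
        // drawSprite(p.textureId, ...);  // back to front
    }
}
```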
The alternative is to add blocking, so that the player and other objects can never get too close to areas where sprite rendering normally fails due to sorting issues. But that only works if it is built in from the start (size of tiles, size of world collisions).

Efficiently drawing and animating thousands of rectangles

What I am trying to do is to have many small rectangles on the screen (up to several thousand) which move randomly.
I have the mechanics behind this figured out (in terms of determining the coordinates for the movement), but I can't figure out the best way to draw the shapes or model their movement.
A couple of strategies I have tried: first, subclassing NSView (this is on the Mac) and creating thousands of instances, overriding their drawRect: method to draw a square inside themselves. It is then pretty simple to change their locations to move them around. However, with several thousand allocated instances, performance is obviously terrible.
I also tried a less object-oriented route, just using NSRectFill to draw the thousands of rectangles. That was blazing fast, but I had trouble implementing the movement I needed with it.
Does anyone have any suggestions on how I could successfully create this animation?
Layers and Core Animation are the best approach for the platform.
Several thousand rectangles may be too much for CoreAnimation. You should consider using OpenGL.
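For context on the OpenGL suggestion: the win comes from batching, i.e. rebuilding one vertex array per frame and drawing every rectangle with a single call, instead of one view or one draw call apiece. A hedged sketch against the legacy fixed-function API that was current at the time (the Rect type is this sketch's own):

```cpp
#include <OpenGL/gl.h>   // macOS; use <GL/gl.h> elsewhere
#include <vector>

struct Rect { float x, y, w, h; };

// Rebuild one big vertex array and draw every rectangle in a single
// call, instead of one draw call (or one NSView) per rectangle.
void drawRects(const std::vector<Rect>& rects) {
    std::vector<float> verts;
    verts.reserve(rects.size() * 8);            // 4 corners * (x, y)
    for (const Rect& r : rects) {
        const float quad[8] = { r.x,       r.y,
                                r.x + r.w, r.y,
                                r.x + r.w, r.y + r.h,
                                r.x,       r.y + r.h };
        verts.insert(verts.end(), quad, quad + 8);
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts.data());
    glDrawArrays(GL_QUADS, 0, (GLsizei)(verts.size() / 2));
    glDisableClientState(GL_VERTEX_ARRAY);
}
```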

SpriteSheet, AtlasSprite, Sprite and optimization

I'm developing an iPhone Cocos2D game and reading about optimization. Some say to use a SpriteSheet whenever possible, others say to use an AtlasSprite whenever possible, and others say a plain Sprite is fine.
I don't get the "whenever possible" part: when can and can't each one be used?
Also, what is the best use case for each type?
My game will typically use 100 sprites in a grid, with about 5 types of sprites, plus some other single sprites. What is the best setup for that? Guidelines for deciding in general cases would help too.
Here's what you need to know about spritesheets vs. sprites, generally.
A spritesheet is just a bunch of images put together into one big image, accompanied by a separate file of image-location data (e.g., image 1 starts at coordinate 0,0 with a size of 100,100; image 2 starts at coordinate 100,0; etc.).
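That data file boils down to frame records like these (a hypothetical C++ layout matching the example above; real tools emit the same idea as a .plist or JSON file):

```cpp
// One record of the spritesheet's coordinate data: where each original
// image lives inside the big packed texture.
struct AtlasFrame {
    const char* name;  // original image filename
    int x, y;          // top-left corner inside the sheet
    int w, h;          // size of the packed image
};

static const AtlasFrame kFrames[] = {
    { "image1.png",   0, 0, 100, 100 },
    { "image2.png", 100, 0, 100, 100 },
};
```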
The advantage here is that loading textures (sprites) is a pretty I/O- and memory-allocation-intensive operation; with a spritesheet you pay that cost once, for one big image, instead of once per image. If you're trying to load textures continually during gameplay, you may get lag.
The second advantage is memory optimization. If you're using transparent PNGs for your images, there may be a lot of blank pixels, and a packer can trim those and pack your textures down far smaller than if you used individual images. Good for both space and memory concerns. (TexturePacker is the tool I use for this.)
So, generally, I'd say it's always a good idea to use a sprite sheet, unless you have non-transparent sprites.