Generate isometric tiles from flat textures - iteration

Is there a simple tool (or code) to generate isometric tiles (cube format) from one or two (side/top) textures?
For example, taking the Minecraft grass top and side textures:
And generating an isometric result like this:
I have a folder containing all the block textures (top and side where needed; some blocks have identical top and side textures).
I want to iterate over this input and generate all the isometric blocks, saving them as .png files, but I don't know how to join the textures and produce the result.
Is there existing software, an API, or a CLI tool that I could call from my iteration script?

For a simple 3D cube like this, you can follow the ImageMagick documentation:
convert \
\( tile_top.png -alpha set -virtual-pixel transparent \
+distort Affine '0,512 0,0   0,0 -87,-50   512,512 87,-50' \) \
\( tile_left.png -alpha set -virtual-pixel transparent \
+distort Affine '512,0 0,0   0,0 -87,-50   512,512 0,100' \) \
\( tile_right.png -alpha set -virtual-pixel transparent \
+distort Affine '0,0 0,0   0,512 0,100   512,0 87,-50' \) \
\
-background none -compose plus -layers merge +repage \
-compose over isometric_cube.png
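To batch this over a whole folder, you can drive the same command from a short script. Here is a minimal sketch in Python, assuming ImageMagick's convert is on your PATH (on ImageMagick 7 the binary is magick); the folder layout and the *_top.png / *_side.png naming are placeholders to adjust, and the affine coordinates assume 512x512 inputs as in the command above, so scale your textures first or adjust the numbers.

import subprocess
from pathlib import Path

SRC = Path("blocks")       # input folder with top/side textures (placeholder)
OUT = Path("isometric")    # output folder (placeholder)
OUT.mkdir(exist_ok=True)

# Affine mappings from the ImageMagick isometric-cube command above,
# written for 512x512 source textures.
TOP   = '0,512 0,0   0,0 -87,-50   512,512 87,-50'
LEFT  = '512,0 0,0   0,0 -87,-50   512,512 0,100'
RIGHT = '0,0 0,0   0,512 0,100   512,0 87,-50'

for top in SRC.glob("*_top.png"):
    name = top.name[:-len("_top.png")]
    side = SRC / f"{name}_side.png"
    if not side.exists():
        side = top                      # block with identical top and side
    subprocess.run([
        "convert",
        "(", str(top),  "-alpha", "set", "-virtual-pixel", "transparent",
             "+distort", "Affine", TOP,   ")",
        "(", str(side), "-alpha", "set", "-virtual-pixel", "transparent",
             "+distort", "Affine", LEFT,  ")",
        "(", str(side), "-alpha", "set", "-virtual-pixel", "transparent",
             "+distort", "Affine", RIGHT, ")",
        "-background", "none", "-compose", "plus", "-layers", "merge",
        "+repage", "-compose", "over", str(OUT / f"{name}.png"),
    ], check=True)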

If you don't care whether your source image turns into a blurry mess, then sure, use whatever 2D scaling/transform method you want. Try rotating a low-res texture by a non-multiple of 90 degrees and see what happens -- it's ugly.
If you want the result to look pixel perfect, then you need to use a (decent enough) 3d renderer for the projection -- and disable antialiasing.
I'm willing to bet Blender could do it, and that would be free, although I haven't tried it in Blender. There's probably a way to do it in any 3D renderer that lets you adjust the camera completely.
You put the source square texture on a flat square 2D surface, then render it with a camera set to orthographic (not perspective) and angled appropriately -- in your case, since the tiles look dimetric to me, rotated 45 degrees to either side and 30 degrees down. That'll give you a pixel-perfect render that you can save to a file or copy to the clipboard for editing in whatever image software you want -- you'll still need to add an alpha channel, for example.
If you get the camera angles right, you just need to play with the camera distance a bit to fit your source object and texture in the render window, and with the height to center it -- but that's not a lengthy or difficult process, since you already know what size you want it to be, and centering isn't hard either. Even with the distance too far, it'll still look mostly right -- just too small. So you simply move the camera closer to the target object (in local coordinates so the angles don't change), re-render, and repeat as necessary. You only have to do this step once by hand; after that you load the saved scene in Blender/3ds Max/Maya/whatever and swap the texture.
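If you go the Blender route, the whole setup can also be scripted so the texture swap and re-render happen automatically. This is only a rough, untested sketch using Blender's Python API (bpy), assuming Blender 2.8+ run as blender --background --python render_tile.py; the paths, resolution, and ortho scale are placeholder values to tweak:

import math
import bpy
import mathutils

TEXTURE = "/path/to/grass_top.png"   # placeholder
OUTPUT = "/path/to/render.png"       # placeholder

bpy.ops.wm.read_factory_settings(use_empty=True)
scene = bpy.context.scene

# A unit plane to carry the texture.
bpy.ops.mesh.primitive_plane_add(size=1)
plane = bpy.context.active_object

# Emission-only material so lighting can't shade the texture,
# with 'Closest' interpolation to keep the pixels crisp.
mat = bpy.data.materials.new("tile")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()
out = nodes.new("ShaderNodeOutputMaterial")
emit = nodes.new("ShaderNodeEmission")
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load(TEXTURE)
tex.interpolation = 'Closest'
links.new(tex.outputs['Color'], emit.inputs['Color'])
links.new(emit.outputs['Emission'], out.inputs['Surface'])
plane.data.materials.append(mat)

# Orthographic camera: 45 degrees around Z, 30 degrees below horizontal
# (x = 60 degrees because a camera at x = 90 looks at the horizon).
bpy.ops.object.camera_add()
cam = bpy.context.active_object
cam.data.type = 'ORTHO'
cam.data.ortho_scale = 1.5          # play with this to fit the tile
cam.rotation_euler = (math.radians(60), 0, math.radians(45))
# Back the camera off along its own view axis so it looks at the origin.
cam.location = cam.rotation_euler.to_matrix() @ mathutils.Vector((0, 0, 5))
scene.camera = cam

# Transparent background, a single sample so nothing gets smoothed.
scene.render.film_transparent = True
scene.eevee.taa_render_samples = 1
scene.render.resolution_x = 256
scene.render.resolution_y = 256
scene.render.filepath = OUTPUT
bpy.ops.render.render(write_still=True)

A full cube (height above the plane) would then just mean swapping the plane for a cube and mapping the side texture -- which, as noted below, is the part worth tackling second.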
Here's a good online tutorial for doing exactly what you want in 3ds Max, but again, I think it would work in pretty much any actual 3D renderer that gives you complete control over the camera:
http://www.pixelpilot.dk/isometric.htm
Note that if your tile has height above the 0 plane (your example does), then you'd have to take that into account -- you might want to start with something simpler, get that working, and understand it first before tackling height.
This really and truly is the only way to do it right and get consistently good results. The only alternatives are: a) be a gifted artist and just hand-draw your sprites, or b) use sprites so low in resolution that they don't require any noteworthy level of skill.
Otherwise, like the rest of us, you model it in 3D first and render it to get a projection, then touch it up by hand in an image editor after the heavy lifting is done.
Hope that helps.

Related

Output from dcraw has checkerboard shading

I'm trying to use dcraw on a color raw image (e.g. CR2 or NEF) to extract raw monochrome data for image processing.
With parameters -4 -D -c I get an image with a checkerboard as shown below:
When unzoomed, the image data is correct, except for the checkerboard pattern, which appears in all images from different cameras.
The above image was produced using -T and zooming into the resulting .tiff file in File Viewer Plus. In practice, I'm reading the .pgm file directly and getting the same checkerboard.
What am I not understanding? Does this have something to do with Bayer filtering?
Yes, this is due to Bayer filtering with no demosaicing. For example, green areas will have their green pixels brighter than their red pixels, following the Bayer pattern, whereas red areas will have dark green pixels.
To get a reasonably correct grayscale (or color) image, intensity has to be weighted over a 2x2 area (in a standard Bayer pattern). What you are looking for cannot be achieved without the demosaicing step.
Your best bet is to extract a color image, then turn it into grayscale.
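As a crude illustration of that 2x2 weighting (not a real demosaic, and the output is half the original resolution), here is a small NumPy sketch; the file name is a placeholder, and it assumes Pillow can read your 16-bit .pgm from dcraw's -4 -D output:

import numpy as np
from PIL import Image

# The -4 -D output is a 16-bit .pgm containing the raw Bayer mosaic.
raw = np.asarray(Image.open("photo.pgm"), dtype=np.float32)

# Average each 2x2 Bayer cell (one R, two G, one B photosite) into a
# single grayscale pixel; this removes the checkerboard.
h, w = raw.shape[0] & ~1, raw.shape[1] & ~1   # drop any odd edge row/column
gray = (raw[0:h:2, 0:w:2] + raw[0:h:2, 1:w:2] +
        raw[1:h:2, 0:w:2] + raw[1:h:2, 1:w:2]) / 4.0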

How to detect an image between shapes from camera

I've been searching the web for how to do this, and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the content between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my content? If so, what happens if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've already seen I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagine that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as the result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation, and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, hence perform a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky.
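Here is one way to construct that matrix, as a sketch in Python with OpenCV and NumPy; the point coordinates, file names, and output size are made-up placeholders. It builds the similarity from exactly two correspondences (e.g. two of the detected circle centres) using complex arithmetic:

import cv2
import numpy as np

def similarity_from_two_points(p0, p1, q0, q1):
    """2x3 affine matrix of the similarity mapping p0->q0 and p1->q1."""
    p0, p1, q0, q1 = (complex(*pt) for pt in (p0, p1, q0, q1))
    a = (q1 - q0) / (p1 - p0)        # rotation + uniform scale
    b = q0 - a * p0                  # translation
    return np.float32([[a.real, -a.imag, b.real],
                       [a.imag,  a.real, b.imag]])

# Two detected circle centres in the camera frame (placeholders) ...
src_a, src_b = (231.0, 118.0), (612.0, 140.0)
# ... and where those centres should land in the rectified output.
dst_a, dst_b = (0.0, 0.0), (400.0, 0.0)

frame = cv2.imread("frame.png")
M = similarity_from_two_points(src_a, src_b, dst_a, dst_b)
result = cv2.warpAffine(frame, M, (400, 300))   # (width, height) of the output
cv2.imwrite("extracted.png", result)

With all three circles detected, you could instead feed the correspondences to cv2.estimateAffinePartial2D, which fits the same kind of transform by least squares and is more robust to detection noise.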

How to cut the png image as per the shape?

I have no experience with any image processing/editing tool, and I am doing a project which requires me to use different shapes. I can create the shapes using Visio, but I am not able to get rid of the white background behind them. I need only the shape, not the square white background. I have tried various online approaches without success.
Any help will be greatly appreciated.
Thanks,
Ganesh
Absolutely any image file has to be contained within a rectangular frame; this includes PNG and SVG.
Some image file formats can have what is called an alpha channel, which allows you to see through transparent areas.
What you want to do is remove the white background to expose the alpha channel in Photoshop (or a similar tool), which can then be saved out as transparent.
For example in Photoshop:
If you open this image directly and have no other layers, double-click the layer that says Background and OK the confirmation box. This turns your flat image into a layered image.
Select the Magic Wand tool and ensure you have a low tolerance set (e.g. 3).
With the wand selected, click the white area to bring up a marquee around your selection (the white background) and hit Delete to remove it.
Your image should now have a chequered background, which is the transparency showing through.
If you now go to File > Save As and select PNG, your image should be saved out with an alpha background.
Please note: there are further optimisations to make if this is for the web, including file formats and file size, but that is beyond the scope of this question. I encourage you to read up on the GIF format and its restrictions, the difference between 8-bit and 24-bit PNGs, and how to use SVG.
You can do it pretty simply at the command line using ImageMagick, which is free, installed on most Linux distros, and available for OSX and Windows.
Basically, you want to make your whites transparent, so you would do
convert shape.png -transparent white result.png
If your whites are a little bit off-white, you could allow for some variation with a little fuzz as follows:
convert shape.png -fuzz 10% -transparent white result.png
I added the checkerboard background just so you can see it on StackOverflow's white background - it is not really there.
By the way, you may like to trim to the smallest bounding rectangle while you are there:
convert shape.png -fuzz 10% -transparent white -trim result.png
By the way, you can also draw your shapes with ImageMagick:
convert -size 150x150 xc: -fill none -stroke "rgb(74,135,203)" -draw 'stroke-width 90 ellipse 0,0 80,80 30,80' arc.png
See Anthony Thyssen's excellent examples here.

App graphic making (transparent and no extra spaces)

I am a coder but not a graphics maker. I can produce graphics that visually meet the quality standards, although I cannot produce graphics that will technically "work." This is what I mean:
I am using CGRectIntersectsRect for colliding images. My image has SOME extra space, which I have made completely transparent using Adobe Photoshop. But even though this extra transparent space is not visible, it is still PART of the image, so when CGRectIntersectsRect is called it detects a touch between the two images' rectangles and it looks like you are hitting nothing. In other words, if the other image touches the transparent space, CGRectIntersectsRect fires and my code is executed. I only want my code to be executed if it hits the actual COLORED space of the image. Here are two things that could help me through this, each followed by a question:
1) Learn how to make NO EXTRA SPACE on an image in Photoshop. How could I do this? Any tutorials?
2) CGRectIntersectsRect only called when touching a colored part of an image. Is there a way to do this?
Thank you guys!
Regarding your question #1, it depends. All images are rectangular -- all of them. So if your sprite is rectangular, you can crop it in Photoshop to just the rectangular area. But if you want to handle, say, a round ball, then there's no such thing as "removing the extra space": your ball will always be stored in a rectangular image, with transparent space in the corners.
Learn how to make NO EXTRA SPACE on an image in photoshop. How could I do this, tutorials?
You can manually select an area using the Rectangular Marquee Tool and Image > Crop or automatically trim the image based on an edge pixel color using Image > Trim.
CGRectIntersectsRect only called when touching a color part of an image. A way to do this?
You can use pixel-perfect collisions or create better bounding shapes for your game objects. For example, instead of using pixel-perfect collision for a spaceship like this one, you could use a triangle for the wings, a rectangle for the body, and a triangle for the head.
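As an illustration of the bounding-shapes idea, here is a small sketch -- the shapes and coordinates are made up, and it uses axis-aligned rectangles in Python rather than CGRect for brevity. You approximate each sprite with a few small rectangles and report a collision only when some pair overlaps:

# Each rectangle is (x, y, width, height).
def rects_intersect(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def shapes_collide(shapes_a, shapes_b):
    # A hit only when any rectangle of one object overlaps any of the other.
    return any(rects_intersect(a, b) for a in shapes_a for b in shapes_b)

# e.g. a spaceship as body + wings instead of one big bounding box
ship = [(20, 0, 10, 40),   # body
        (0, 25, 50, 10)]   # wings
rock = [(45, 30, 12, 12)]
print(shapes_collide(ship, rock))  # True: the rock clips the wing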
Pixel-perfect collision
One way you could implement it (see the sketch after this list) would be to:
Have a blank image in memory.
Draw the visible pixels from one image in blue (#0000ff).
Draw the visible pixels from the other image in red (#ff0000).
If there are any purple pixels in the image (#ff00ff), then there's an intersection.
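Here is a minimal sketch of that idea in Python with NumPy and Pillow (the file names, positions, and canvas size are placeholders); instead of literally drawing blue and red, it keeps one boolean mask per sprite and looks for pixels set in both:

import numpy as np
from PIL import Image

def pixels_collide(img_a, pos_a, img_b, pos_b, canvas_size):
    """True if the opaque pixels of two RGBA sprites overlap anywhere.
    Assumes both sprites lie fully inside the canvas."""
    w, h = canvas_size
    layer_a = np.zeros((h, w), dtype=bool)   # the "blue" layer
    layer_b = np.zeros((h, w), dtype=bool)   # the "red" layer
    for layer, img, (x, y) in ((layer_a, img_a, pos_a), (layer_b, img_b, pos_b)):
        alpha = np.asarray(img.convert("RGBA"))[:, :, 3] > 0
        layer[y:y + img.height, x:x + img.width] = alpha
    # "Purple" pixels are those covered in both layers.
    return bool(np.any(layer_a & layer_b))

ship = Image.open("ship.png")
rock = Image.open("rock.png")
print(pixels_collide(ship, (10, 20), rock, (40, 25), (320, 240)))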
Alternative collision detection solution
If your game is physics-based, then you can use a physics engine like Box2D. You can use circles, rectangles, and polygons to represent all of your game objects and it'll give you accurate results without unnecessary overhead.
For collision detection for non-rectangular shapes, you should look into one of the many game and/or physics libraries available for iOS. Cocos2d coupled with Box2d or chipmunk are popular choices.
If you want to do it yourself, you'll need to start with something like a custom CGPath tracing the actual shape of each object, then use a function like CGPathContainsPoint (that's from memory, it may be wrong). But it is not a simple job. Angry Birds uses Box2D, AFAIK.

Simple algorithm for tracking a rectangular blob

I have created an experimental fast rectangular-object tracking system; it will be used for head tracking and for controlling objects in a 3D engine (Ogre3D).
For now I am able to show the webcam any kind of brightly colored rectangle (text markers are good objects), and the system registers the basic properties of this object (hue/value/lightness and the initial width and height at 0 degrees rotation).
After I have registered the trackable object, I do some simple frame processing to create a grayscale probability map.
So now I have 2 known things:
1) the 4 corners of the object's last position (it's always a rectangle, but it may be rotated)
2) a pretty rectangular (but still far from perfect) blob which is the brightest thing in the frame. I can get the coordinates of any point of the blob without problems; point detection is stable enough.
I can find a bounding rectangle of the object without problems, but I have a problem with detecting the corners of the object itself.
I need the simplest possible (quick & dirty would be great) algorithm to scan the image, starting from some known coordinates (a point inside the blob), and detect the new 4 x,y coordinates of the "blobish" rectangle's corners (not the corners of a bounding box, but the corners of the rectangular blob itself).
Ready-to-use C++ function would be awesome, but somehow google doesn't like me today :(
I think it would be overkill to use some complicated function from the OpenCV library just to extract the 4 points of a single rectangular blob. But if you know a quick and efficient way to do it using OpenCV (it must be real-time and light on the CPU, because I'll be running the 3D engine at the same time), then I would be really grateful.
You can apply a Hough transform to the segmented image to detect lines. Using the detected lines, you can calculate their intersections to find the corner coordinates of the blob.
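A rough sketch of that approach in Python with OpenCV (the file name and the Canny/Hough thresholds are placeholders to tune against your probability map):

import cv2
import numpy as np

def line_intersection(l1, l2):
    """Intersection of two lines in (rho, theta) form, or None if near-parallel."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:
        return None
    x, y = np.linalg.solve(A, np.array([r1, r2]))
    return (float(x), float(y))

mask = cv2.imread("probability_map.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
edges = cv2.Canny(binary, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)

# Corner candidates come from intersections of roughly perpendicular lines.
corners = []
if lines is not None:
    params = [l[0] for l in lines]
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            if abs(np.sin(params[i][1] - params[j][1])) > 0.5:
                pt = line_intersection(params[i], params[j])
                if pt is not None:
                    corners.append(pt)

Since you already know a point inside the blob from the previous frame, you can keep just the four candidates closest to it and discard the rest; HoughLines on a small, well-segmented mask should be cheap enough to run per frame alongside the engine.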