Detection of chessboard-like pattern in OpenCV

I have a problem with detecting a chessboard-like pattern. The image is very noisy because it was captured with a laser scanner.
The only thing I have managed to achieve is detection of the big rectangle.
Now I have no idea how to detect those small squares. I have tried all sorts of different algorithms, but the contrast in the squares seems too low. Does anybody have any ideas?
Other pattern images: https://dl.dropboxusercontent.com/u/3681534/kalibrator/6.png https://dl.dropboxusercontent.com/u/3681534/kalibrator/8.png

A way to progress would be to determine the gray value at the inner border of the rectangle, then (see the sketch after this list):
Adjust the average brightness inside the rectangle to one level (the small squares will still be a bit lighter than the rest).
Increase the contrast a lot.
Find the lines that run along the edges of the squares.
Either use the line crossings directly, or paint the squares white and black.
Calculate your calibration data.
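A rough OpenCV sketch of the first three steps, assuming the big rectangle has already been cropped to an 8-bit grayscale ROI. All parameter values (blur size, CLAHE clip limit, Canny/Hough thresholds) are guesses you will need to tune against the laser-scanner noise:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// roi: 8-bit grayscale crop of the rectangle interior.
std::vector<cv::Vec2f> findSquareEdgeLines(const cv::Mat& roi)
{
    // 1. Even out the brightness: estimate the local background with a
    //    large blur, subtract it, and re-center the result around mid-gray.
    cv::Mat background, flat;
    cv::blur(roi, background, cv::Size(31, 31));
    cv::subtract(roi, background, flat, cv::noArray(), CV_16S);
    flat.convertTo(flat, CV_8U, 1.0, 128);

    // 2. Increase the contrast a lot; CLAHE copes with noisy input
    //    better than a plain global histogram equalization.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(4.0, cv::Size(8, 8));
    cv::Mat contrast;
    clahe->apply(flat, contrast);

    // 3. Find the lines that run along the edges of the squares.
    cv::Mat edges;
    cv::Canny(contrast, edges, 40, 120);
    std::vector<cv::Vec2f> lines;                  // (rho, theta) pairs
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 80);
    return lines;   // intersect these to get the grid crossings
}
```

Intersecting the returned lines then gives the square corners for the calibration step.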


Large (in meters) landscape mesh has artifacts on peaks only at certain scale

I made a mesh from a Digital Elevation Map spanning a 1x1 degree box of geography, but when I scale the mesh up to 11139 m in Blender I get these visible jagged shadows on the peaks of the mesh. I'd prefer not to scale everything down, though I suppose I can; it just seems like a strange issue that I want to understand better.
My goal is to use the landscape in a WebVR application, but when I put this mesh into an A-Frame scene it has the same issue. Thanks for any tips!
Quick answer:
I think this may be caused by the clipping start/end values, also called the near/far clipping planes. Adjusting them may fix the issue, but it will also limit the rendering distance.
Longer explanation:
Picture a simple grayscale gradient stretched across your entire scene depth (the Z depth buffer). The range of this buffer is set by the camera's start/end (near/far) clipping setting.
By default, Blender has its start/end (near/far) clipping set to 0.01 - 1000, while A-Frame uses 0.005 - 10000. You can find more information here: A-Frame camera #properties
That means the renderer has to fit every single point in that range somewhere on the grayscale. That may cause overlapping or Z-fighting, because it simply lacks the precision to distinguish the details. This is mainly visible at edges/peaks because the polygons meet there at acute angles and the program has to round the Z-values, which causes overlapping, visible as darker shadows (most likely the back side of the polygon behind).
You may also want to read more about Z-fighting because it is somewhat related.
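To see how much precision the wider range costs, here is a small self-contained C++ sketch. It assumes a standard OpenGL-style perspective depth mapping and a 24-bit depth buffer, which may differ from what your renderer actually uses:

```cpp
#include <cstdio>

// World-space distance covered by one step of a 24-bit depth buffer at
// viewing distance t, for a perspective projection with near plane n and
// far plane f. Uses d = f(t - n) / (t(f - n)) and its inverse
// t = f*n / (f - d(f - n)).
double depthStep(double n, double f, double t)
{
    const double levels = (1 << 24) - 1;            // 24-bit depth buffer
    double d = f * (t - n) / (t * (f - n));         // depth value at t
    double tNext = f * n / (f - (d + 1.0 / levels) * (f - n));
    return tNext - t;
}

int main()
{
    // Compare Blender's default range with A-Frame's wider range at 500 m.
    std::printf("0.01 - 1000:   %.3f m per depth step\n", depthStep(0.01, 1000.0, 500.0));
    std::printf("0.005 - 10000: %.3f m per depth step\n", depthStep(0.005, 10000.0, 500.0));
}
```

With these numbers, one depth step at 500 m covers roughly 1.5 m for the 0.01 - 1000 range but roughly 3 m for 0.005 - 10000, so polygons meeting at acute angles can easily quantize to the same depth value.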

How does Blender calculate vertex normals?

I'm attempting to calculate vertex normals for various game assets. The normals I calculate are used for "inflating" the model (to draw behind the real model producing a thick outline).
I currently compute the normal for each face and average all of them (several other questions on Stack Overflow suggest this approach). However, this doesn't work for sharp corners like this one (adjacent faces' normals marked in orange, the normal I'm trying to calculate is outlined in green).
The object looks like a small pedestal and we're looking at the front-left corner. There are three adjoining faces (the bottom face isn't visible; its normal points straight down).
Blender computes an excellent normal that lies squarely in the middle of the three faces' normals; it seems like it somehow calculates a normal that has minimum rotation to each of the three face normals. Blender's normal also doesn't change when the quads are triangulated differently.
Averaging the faces' normals gives me a different normal that points slightly upward in the Z-axis (-0.45, -0.89, +0.08). Inflating my model this way doesn't produce a good outline because the bottom face of the outline is shifted up and doesn't enclose the original model.
I attempted to look at the Blender source code but couldn't find what I was looking for. If anyone can point me to the algorithm in the Blender source, I'd accept that also.
Weight the face normals by the angle each face subtends at the vertex where they join. This is common practice in surface rendering (see the discussion here: http://www.bytehazard.com/code/vertnorm.html), and it ensures that your bottom face is weighted more strongly than the two slanted side faces. I don't know whether Blender does it differently, but you should give it a try.
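A minimal C++ sketch of angle-weighted vertex normals, assuming the mesh is already triangulated and stored as vertex positions plus index triples:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3& operator+=(Vec3& a, Vec3 b) { a.x += b.x; a.y += b.y; a.z += b.z; return a; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return len > 0.0f ? v * (1.0f / len) : v;
}

// For each triangle corner, accumulate the face normal scaled by the
// angle the triangle makes at that corner, then normalize per vertex.
std::vector<Vec3> angleWeightedNormals(const std::vector<Vec3>& verts,
                                       const std::vector<std::array<int, 3>>& tris)
{
    std::vector<Vec3> normals(verts.size(), {0.0f, 0.0f, 0.0f});
    for (const auto& t : tris) {
        for (int i = 0; i < 3; ++i) {
            Vec3 p  = verts[t[i]];
            Vec3 e1 = normalize(verts[t[(i + 1) % 3]] - p);
            Vec3 e2 = normalize(verts[t[(i + 2) % 3]] - p);
            float angle = std::acos(std::clamp(dot(e1, e2), -1.0f, 1.0f));
            normals[t[i]] += normalize(cross(e1, e2)) * angle;  // angle-weighted
        }
    }
    for (auto& n : normals) n = normalize(n);
    return normals;
}
```

A convenient property of angle weighting is that the result is independent of how quads are triangulated (the two triangle angles at a quad corner sum to the quad's corner angle), which matches the behavior you observed in Blender.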

Trace a CCSprite (cocos2d-iphone)

I have a layer with a sprite of a simple black donut. I want the user to be able to draw on the sprite in a different color (which I've managed to do without any problem using CCRenderTexture).
My question is how I can calculate whether the image has been traced at least 95% (meaning, find out when 95% of the black pixels have been changed to the new color). I've tried methods like taking a screenshot of the layer and counting the number of black pixels, but it hasn't worked well (I used this solution: https://stackoverflow.com/a/1262893/1577738).
It would be even better if I could just change the color of each pixel as it's touched (to avoid issues with coloring out of the lines). I could theoretically just split the donut into like 10 sprites and change that section's color if the user touches it, but that seems ridiculous if I give the user options to use a bunch of different colors.
Am I going about this the wrong way? Your suggestions are much appreciated!
Reading pixel colors will be rather inaccurate and slow. I suggest dividing the area into smaller rectangles (e.g. 8x8 or 4x4 cells) and flagging each one as "visited" when the user draws on it. If most rectangle areas are flagged, the user has drawn on most parts of the texture.
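A minimal sketch of the grid-flagging idea, written in C++ for brevity (the same logic ports directly to an Objective-C touch handler). The cell counts and the idea of pre-marking which cells contain donut pixels are assumptions to adapt:

```cpp
#include <vector>

// Divides the sprite into cols x rows cells and flags each cell the user
// has drawn over. Only cells that actually contain donut pixels count
// toward the total; call markDrawable() once per such cell up front.
class CoverageGrid {
public:
    CoverageGrid(int cols, int rows, float spriteW, float spriteH)
        : cols_(cols), rows_(rows), w_(spriteW), h_(spriteH),
          drawable_(cols * rows, false), visited_(cols * rows, false) {}

    void markDrawable(int cx, int cy) { drawable_[cy * cols_ + cx] = true; }

    // Call with every touch position, in sprite-local coordinates.
    void markTouch(float x, float y) {
        int cx = static_cast<int>(x / w_ * cols_);
        int cy = static_cast<int>(y / h_ * rows_);
        if (cx >= 0 && cx < cols_ && cy >= 0 && cy < rows_)
            visited_[cy * cols_ + cx] = true;
    }

    // Fraction of drawable cells the user has touched so far.
    float coverage() const {
        int total = 0, hit = 0;
        for (size_t i = 0; i < drawable_.size(); ++i) {
            if (!drawable_[i]) continue;
            ++total;
            if (visited_[i]) ++hit;
        }
        return total > 0 ? static_cast<float>(hit) / total : 0.0f;
    }

private:
    int cols_, rows_;
    float w_, h_;
    std::vector<bool> drawable_, visited_;
};
```

Checking `coverage() >= 0.95f` after each touch then replaces the screenshot-and-count approach entirely.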

App graphic making (transparent and no extra spaces)

I am a coder, not a graphic maker. I can produce graphics that meet visual quality standards, but I cannot produce graphics that will technically "work." Here is what I mean:
I am using CGRectIntersectsRect for colliding images. My image has SOME extra space, which I have made completely transparent using Adobe Photoshop. Even though this extra transparent space is not visible, it is still PART of the image, so when the other image touches the transparent space, CGRectIntersectsRect reports an intersection and my code is executed, and it looks like you are hitting nothing. I only want my code to be executed when the other image hits the actual COLORED part of the image. Here are two things that could help me with that; each comes with a question:
Learn how to make NO EXTRA SPACE on an image in photoshop. How could I do this, tutorials?
CGRectIntersectsRect only called when touching a color part of an image. A way to do this?
Thank you guys!
Regarding your question #1, it depends. All images are rectangular, all of them. So if your sprite is rectangular, you can crop it in Photoshop to just the rectangular area. But if you want to handle, say, a circular ball, then you can't "remove the extra space": your circular ball will always be stored in a rectangular image, with transparent space in the corners.
Learn how to make NO EXTRA SPACE on an image in photoshop. How could I do this, tutorials?
You can manually select an area using the Rectangular Marquee Tool and Image > Crop or automatically trim the image based on an edge pixel color using Image > Trim.
CGRectIntersectsRect only called when touching a color part of an image. A way to do this?
You can use pixel-perfect collisions or create better bounding shapes for your game objects. For example, instead of using pixel-perfect collision for a spaceship like this one, you could use a triangle for the wings, a rectangle for the body, and a triangle for the head.
Pixel-perfect collision
One way you could implement it (sketched after this list) would be to:
Have a blank image in memory.
Draw visible pixels from one image in blue (#0000ff).
Draw visible pixels from the other image in red (#ff0000).
If there's any purple pixels in the image (#ff00ff), then there's an intersection.
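In practice you don't need to composite actual red and blue images: the same "purple pixel" test can be run on two alpha masks rasterized into a shared coordinate space. A C++ sketch, where the mask layout and alpha threshold are assumptions:

```cpp
#include <cstdint>
#include <vector>

// maskA/maskB: per-pixel alpha of each sprite, rasterized into the same
// width x height coordinate space. Returns true on the first pixel covered
// by both sprites (the equivalent of finding a purple pixel).
bool pixelOverlap(const std::vector<uint8_t>& maskA,
                  const std::vector<uint8_t>& maskB,
                  int width, int height, uint8_t threshold = 0)
{
    for (int i = 0; i < width * height; ++i)
        if (maskA[i] > threshold && maskB[i] > threshold)
            return true;
    return false;
}
```

As an optimization, only run this when CGRectIntersectsRect already reports that the bounding rectangles overlap, and only over the intersection rectangle.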
Alternative collision detection solution
If your game is physics-based, then you can use a physics engine like Box2D. You can use circles, rectangles, and polygons to represent all of your game objects and it'll give you accurate results without unnecessary overhead.
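For illustration, a minimal Box2D sketch of the spaceship idea from the previous answer: one body carrying a rectangle fixture for the hull and a triangle fixture for the nose. The sizes and the include path are assumptions (older cocos2d bundles ship Box2D as `Box2D/Box2D.h`):

```cpp
#include <Box2D/Box2D.h>

// One dynamic body approximated by two convex fixtures instead of
// pixel-perfect collision: a rectangle for the hull, a triangle for the nose.
b2Body* makeShipBody(b2World& world)
{
    b2BodyDef def;
    def.type = b2_dynamicBody;
    b2Body* body = world.CreateBody(&def);

    b2PolygonShape hull;
    hull.SetAsBox(0.5f, 1.0f);                 // 1 m x 2 m box (half-extents)
    body->CreateFixture(&hull, 1.0f);

    // Counter-clockwise triangle sitting on top of the hull.
    b2Vec2 nose[3] = { b2Vec2(-0.5f, 1.0f), b2Vec2(0.5f, 1.0f), b2Vec2(0.0f, 1.6f) };
    b2PolygonShape tip;
    tip.Set(nose, 3);
    body->CreateFixture(&tip, 1.0f);
    return body;
}
```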
For collision detection for non-rectangular shapes, you should look into one of the many game and/or physics libraries available for iOS. Cocos2d coupled with Box2d or chipmunk are popular choices.
If you want to do it yourself, you'll need to start with something like a custom CGPath tracing the actual shape of each object, then use a function like CGPathContainsPoint (that's from memory; it may be wrong). But it is not a simple job. Angry Birds uses Box2D, AFAIK.

Simple algorithm for tracking a rectangular blob

I have created an experimental fast rectangular-object tracking system; it will be used for head tracking and controlling objects in a 3D engine (Ogre3D).
For now I am able to show the webcam any kind of brightly colored rectangle (text markers are good objects) and the system registers basic properties of the object (hue/value/lightness and initial width and height at 0 degrees rotation).
After I have registered the trackable object, I do some simple frame processing to create a grayscale probability map.
So now I have 2 known things:
1) 4 corners for the last object position (it's always a rectangle but it may be rotated)
2) a pretty rectangular (but still far from perfect) blob which is the brightest in the frame. I can get coordinates of any point of the blob without problems, point detection is stable enough.
I can find a bounding rectangle of the object without problems, but I have a problem with detecting the object's corners themselves.
I need the simplest possible (quick & dirty would be great) algorithm that scans the image starting from some known coordinates (a point inside the blob) and detects the 4 new x,y coordinates of the "blobish" rectangle's corners (not the corners of the bounding box, but the corners of the rectangular blob itself).
Ready-to-use C++ function would be awesome, but somehow google doesn't like me today :(
I think it would be overkill to use some complicated function from the OpenCV library just to extract the 4 points of a single rectangular blob. But if you know a quick and efficient way to do it with OpenCV (it must be real-time and light on the CPU, because I'll be running the 3D engine at the same time), I would be really grateful.
You can apply a Hough transform to the segmented image to detect lines. Using the detected lines, you can then calculate their intersections to find the corner coordinates of the blob.
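A quick-and-dirty OpenCV sketch of that idea: threshold the probability map, run the standard Hough transform, and intersect every pair of sufficiently non-parallel lines. The thresholds are guesses to tune, and the returned intersection points still need clustering down to the final 4 corners:

```cpp
#include <cmath>
#include <opencv2/opencv.hpp>
#include <vector>

// probMap: 8-bit grayscale probability map with the blob as the brightest region.
std::vector<cv::Point2f> findBlobCorners(const cv::Mat& probMap)
{
    cv::Mat bin, edges;
    cv::threshold(probMap, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    cv::Canny(bin, edges, 50, 150);

    std::vector<cv::Vec2f> lines;                   // (rho, theta) pairs
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 60);

    std::vector<cv::Point2f> corners;
    for (size_t i = 0; i < lines.size(); ++i)
        for (size_t j = i + 1; j < lines.size(); ++j) {
            float r1 = lines[i][0], t1 = lines[i][1];
            float r2 = lines[j][0], t2 = lines[j][1];
            // Skip nearly parallel lines: no stable intersection.
            // (Quick check; ignores the theta wrap-around at pi.)
            if (std::fabs(t1 - t2) < CV_PI / 8) continue;
            // Solve x*cos(t) + y*sin(t) = r for both lines.
            float det = std::cos(t1) * std::sin(t2) - std::sin(t1) * std::cos(t2);
            float x = (r1 * std::sin(t2) - r2 * std::sin(t1)) / det;
            float y = (r2 * std::cos(t1) - r1 * std::cos(t2)) / det;
            if (x >= 0 && y >= 0 && x < probMap.cols && y < probMap.rows)
                corners.emplace_back(x, y);
        }
    return corners;   // cluster/merge these to get the final 4 corners
}
```

Since you already have an approximate position from the last frame, you can run this on a small region of interest around the previous corners to keep it light on the CPU.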