Matplotlib: can't see single non-zero pixel in hi-res image

I have some 1000x1000 images where one pixel is 1 and the rest 0. My issue is that I can't even see the location of the high pixel using imshow() unless I zoom in all over the place to look for it. I assume it is doing a nearest lookup when decimating to screen resolution. Is there any trick I can use to get around this? I know I can convolve with a kernel of some kind to expand the point, but this is a little bit expensive computationally, and if I zoom in it won't look correct.
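One workaround sketch (mine, not from the thread; assumes the data is a NumPy array): locate the non-zero pixel with np.nonzero and overlay a marker on top of imshow(), so decimation can never hide it and zooming in still looks correct:

    import numpy as np
    import matplotlib.pyplot as plt

    img = np.zeros((1000, 1000))
    img[413, 611] = 1.0                    # hypothetical hot pixel

    plt.imshow(img, cmap="gray", interpolation="nearest")
    rows, cols = np.nonzero(img)           # locate every non-zero pixel
    plt.scatter(cols, rows, s=80, facecolors="none", edgecolors="r")
    plt.show()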

Related

Large (in meters) landscape mesh has artifacts on peaks only at certain scale

I made a mesh from a Digital Elevation Map that spanned a 1x1 degree box of geography, but when I scale the mesh up to 11139m in Blender I get these visible jagged shadows on the peaks of the mesh. I'd prefer not to scale everything down, but I suppose I can; it just seems like a strange issue I want to better understand.
My goal is to use the landscape in a WebVR application, but when I put this mesh into an A-Frame scene it has the same issue. Thanks for any tips!
Quick answer:
I think this may be caused by the clipping start/end values, also called the near/far clipping planes. Adjusting them may fix the issue, but it can also limit the rendering distance.
Longer explanation:
Imagine a simple grayscale gradient scaled across your entire scene depth (the Z depth buffer). The range of this buffer is set by the camera's start/end (near/far) clipping setting.
By default, Blender has its start/end (near/far) clipping set to 0.01 - 1000, while A-Frame's defaults are 0.005 - 10000. You can find more information here: A-Frame camera #properties
That means the renderer has to fit every single point in that range somewhere on the grayscale. That may cause overlapping or Z-fighting, because the buffer simply lacks the precision to distinguish the details. It is mainly visible at edges/peaks because the polygons connect there at acute angles and the program has to round the Z-values; that rounding shows up as overlap, visible as darker shadows (most likely the backside of the polygon behind).
You may also want to read more about Z-fighting because it is somewhat related.
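For a sense of scale (my own back-of-the-envelope arithmetic, assuming a 24-bit depth buffer and a standard perspective projection), the smallest depth difference the buffer can distinguish at distance z is roughly dz = z^2 * (far - near) / (near * far * 2^24), so raising the near plane buys enormous precision:

    def depth_step(z, near, far, bits=24):
        # Approximate depth-buffer resolution at distance z.
        return (far - near) * z**2 / (near * far * 2**bits)

    # A-Frame-like defaults vs. a larger near plane, 5000 m from the camera:
    print(depth_step(5000, near=0.005, far=10000))   # ~298 m between steps
    print(depth_step(5000, near=0.1,   far=10000))   # ~15 m between steps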

Measuring sizes of before/after images in Photoshop

Perhaps my mind isn't mathematically competent enough to do this, but here goes:
I am using Photoshop. I have 2 images taken from different heights. Both images contain the same object (so its real size is unchanged), and I am trying to resize both images so that this object is the same pixel size in each. That way I can properly measure the differences between other objects in the images at the proper ratio.
My end goal is to measure the differences of scars healing (before and after) using a same-size object in both images as a baseline.
To measure the difference in the photos, I have been counting pixels using the histogram feature.
Even though I changed the pixel width and height to roughly the same size, the 2 images have drastically different numbers of pixels. So comparing the red or white from the before to the after won't make sense until I can get these to match.
Can anyone point me in the right direction here? How can I compare apples to apples here?
So I went a different route here, in case anyone was wondering what I did.
Rather than change the size of the images, I just calculated the increase manually for each image.
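The calculation itself is simple; here is a hedged sketch (the numbers are made up): measure the reference object in each photo, derive the linear ratio, and remember that pixel-count areas scale with the square of that ratio:

    ref_before_px = 220      # reference object's width in the "before" photo
    ref_after_px = 180       # the same object's width in the "after" photo

    scale = ref_before_px / ref_after_px      # linear after -> before ratio

    scar_after_px = 5400                      # pixel count from the histogram
    scar_after_normalized = scar_after_px * scale**2   # areas scale squared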

How to detect an image between shapes from camera

I've been searching around the web about how to do this, and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've searched, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've already seen I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagine that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, hence perform a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky.
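A minimal sketch of that construction in Python/OpenCV (the file name and coordinates are placeholders; with more than two matches, cv2.estimateAffinePartial2D does the least-squares version): a similarity transform is x' = a*x - b*y + tx, y' = b*x + a*y + ty, so two point pairs give four linear equations in (a, b, tx, ty):

    import numpy as np
    import cv2

    def similarity_from_two_points(src, dst):
        # Solve the 4x4 linear system for (a, b, tx, ty).
        (x0, y0), (x1, y1) = src
        (u0, v0), (u1, v1) = dst
        A = np.array([[x0, -y0, 1, 0],
                      [y0,  x0, 0, 1],
                      [x1, -y1, 1, 0],
                      [y1,  x1, 0, 1]], dtype=np.float64)
        b = np.array([u0, v0, u1, v1], dtype=np.float64)
        a_, b_, tx, ty = np.linalg.solve(A, b)
        return np.array([[a_, -b_, tx],
                         [b_,  a_, ty]])

    img = cv2.imread("camera_frame.png")      # hypothetical input frame
    src = [(120, 340), (860, 310)]            # two detected circle centers
    dst = [(0, 0), (639, 0)]                  # where they should land
    M = similarity_from_two_points(src, dst)
    out = cv2.warpAffine(img, M, (640, 480))  # crop+rotate+scale in one go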

Re-sizing visual image while maintaining image dimensions

I'm working with documents, so maintaining the original image dimensions and subsequent dpi is important.
The aspect ratio is always maintained, so the automatic fill modes and the like don't seem to have any effect.
Say I have a 300 dpi document and the user wants to clear an inch border around the image. So I need an inch cropped from the image, but the result needs to be the original image dimensions (2550x3300).
I have been able to achieve this effect with...
...&crop=300,300,-300,-300&margin=300,300,300,300
This works, but seems more than a little clunky. I've tried a lot of other combinations but they all seem to enlarge or reduce the image size which is undesirable in my case.
So does someone know a simpler syntax to achieve the desired result, or do I need to re-size the image and then calculate and fill with a margin as I'm doing now?
Thanks
It turns out that my example requests the image at its full size, which is a special case. When I introduce a width or height into the command, things don't work very well, since the crop size is relative to the original image dimensions while the margin size is relative to the result image.
Thinking about it more I abandoned the crop approach. What I really needed was a way to introduce a clipping region into the result bitmap. So I built an extension to do just that. It works well as it doesn't interfere with any of Resizer's layout calculations and the size of the returned image is whatever the height or width were specified as. Which is just what I needed. The Faces plugin has an example of introducing a clipping region.
Karlton
Cropping and re-adding 300px on each edge is best accomplished exactly the way you're doing it:
&crop=300,300,-300,-300&margin=300
What kind of improved syntax would you expect? This isn't a common operation.
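For what it's worth, the arithmetic behind those numbers is mechanical (dpi times the border width in inches); a hypothetical helper that builds the querystring (the function is mine, not part of ImageResizer):

    def border_query(dpi, inches=1.0):
        px = int(dpi * inches)               # 300 dpi * 1 in = 300 px
        return f"&crop={px},{px},-{px},-{px}&margin={px}"

    print(border_query(300))   # &crop=300,300,-300,-300&margin=300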

iOS Quartz/CoreGraphics drawing feathered stroke

I am drawing a path into a CGContext following a set of points collected from the user. There seems to be some random input jitter causing some of the line edges to look jagged. I think a slight feather would solve this problem. If I were using OpenGL ES I would simply apply a feather to the sprite I am stroking the path with; however, this project requires me to stay in Quartz/CoreGraphics and I can't seem to find a similar solution.
I have tried drawing 5 lines with each line slightly larger and more transparent to approximate a feather. This produces a bad result and slows performance noticeably.
This is the line drawing code:
// Extend the current Core Graphics path with a cubic Bezier segment:
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), (int)lastPostionDrawing1.x, (int)lastPostionDrawing1.y);
CGContextAddCurveToPoint(UIGraphicsGetCurrentContext(), ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, lastPostionDrawing2.x, lastPostionDrawing2.y);
// Mirror the same segment into the UIBezierPath:
[currentPath addCurveToPoint:CGPointMake(lastPostionDrawing2.x - (int)furthestLeft.x + (int)penSize, lastPostionDrawing2.y) controlPoint1:CGPointMake(ctrl1_x, ctrl1_y) controlPoint2:CGPointMake(ctrl2_x, ctrl2_y)];
I'm going to go ahead and assume that your CGContext still has anti-aliasing turned on, but if not, then that's the obvious first thing to try, as @Davyd's comment suggests: CGContextSetShouldAntialias is the function of interest.
Assuming that's not the problem and the line is being anti-aliased by the context, but you still want something 'softer', I can think of a couple of ways to do this that should hopefully be faster than stroking 5 times.
First, you can try getting the stroked path (i.e. a path that describes the outline of the stroke of the current path) using CGContextReplacePathWithStrokedPath. You can then fill this path with a gradient (or whatever other fill technique gives the desired results). This will work well for straight lines, but won't be straightforward for curved paths, since the gradient fills the area of the stroked path and will be either linear or radial.
Another, perhaps less obvious, option might be to abuse CG's shadow drawing for this purpose. The function you want to look up is CGContextSetShadowWithColor. Here's the method:
Save the GState: CGContextSaveGState
Get the bounding box of the original path
Copy the path, translating it away from itself by 2.0 * bbox.width using CGPathCreateCopyByTransformingPath (note: use the X direction only, that way you don't need to worry about flips in the context)
Clip the context to the original bbox using CGContextClipToRect
Set a shadow on the context with CGContextSetShadowWithColor:
Some minimal blur (Start with 0.5 and go from there. The blur parameter is non-linear, and IME it's sort of a guess and check operation)
An offset equal to -2.0 * bbox width, and 0.0 height, scaled to base space. (Note: these offsets are in base space. This will be maddening to figure out, but assuming you're not adding your own scale transforms, the scale factor will either be 1.0 or 2.0, so practically speaking, you'll be setting an offset.width of either -2.0*bbox.width or -4.0*bbox.width)
A color of your choosing.
Stroke the translated-away path.
Pop the GState CGContextRestoreGState
This should leave you with "just" the shadow, which you can hopefully tweak to achieve the results you want.
All that said, CG's shadow drawing performance is, IME, less than completely awesome, and less than completely deterministic. I would expect it to be faster than stroking the path 5 times with 5 different strokes, but not overwhelmingly so.
It'll come down to how much achieving this effect is worth to you.
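If it helps to see the feathering idea in isolation, here is a sketch outside Quartz entirely (Python with Pillow, illustration only, not the shadow trick above): composite a blurred copy of the stroke underneath the crisp stroke.

    from PIL import Image, ImageDraw, ImageFilter

    size = (400, 400)
    stroke = Image.new("RGBA", size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(stroke)
    draw.line([(50, 300), (180, 80), (350, 260)], fill=(0, 0, 0, 255), width=6)

    feather = stroke.filter(ImageFilter.GaussianBlur(radius=2.5))  # soft halo
    result = Image.alpha_composite(feather, stroke)  # crisp core on top
    result.save("feathered.png")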