I have a grayscale image and I am measuring its luminosity with Photoshop's luminosity histogram. However, I would like to know which unit of measurement Photoshop uses (e.g. candela per square meter, lux, etc.).
I'm new to Photoshop and I did have a look online, but I cannot find this information.
Also, if Photoshop doesn't use candela per square meter, lux, etc., is there a way to calculate candela per square meter from Photoshop's values?
Any help will be very much appreciated.
Photoshop doesn't use an actual physical unit of measurement. It works with relative percent values of the pixel data, and that is what the histogram reflects.
I have two raster layers with different resolutions that I want to join (see this image). One has a higher resolution (transparent yellow); the other has a lower resolution but a bigger extent (the whole earth) and carries information about different classes (drawn in different colors here). The resulting raster should have the higher resolution and the extent of the raster drawn in yellow, but should be joined with the other raster, i.e. carry the information about which class each cell lies within.
Really appreciate any help!
Cheers
You should mosaic the rasters: use the Mosaic To New Raster tool.
But first the rasters should share the same standard.
Use the Resample tool to change the resolution (for a class raster, nearest-neighbor resampling preserves the class values).
Your rasters' values should also be on the same scale. ArcMap will mosaic them anyway, but the result can be wrong.
For example, take LST (land surface temperature) rasters: suppose one raster is in degrees Fahrenheit and the other is in degrees Celsius.
In this case, even if the tools run and generate a new raster layer, the values will be incorrect.
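For reference, a minimal arcpy sketch of these two steps (the paths, filenames, and cell size are placeholder assumptions, not your actual data):

import arcpy

arcpy.env.workspace = r"C:\data"  # placeholder workspace

# Step 1: resample the coarse class raster to the fine raster's cell size.
# NEAREST keeps the class values intact (no interpolation between classes).
arcpy.Resample_management("coarse_classes.tif", "coarse_resampled.tif",
                          cell_size="30 30", resampling_type="NEAREST")

# Step 2: mosaic both rasters into a new raster dataset.
arcpy.MosaicToNewRaster_management(
    input_rasters=["fine_raster.tif", "coarse_resampled.tif"],
    output_location=r"C:\data",
    raster_dataset_name_with_extension="joined.tif",
    pixel_type="8_BIT_UNSIGNED",
    number_of_bands=1,
    mosaic_method="FIRST")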
I hope this answer was helpful.
Good luck!
I have a problem with detecting a chessboard-like pattern. The image is very noisy because it was captured with a laser scanner.
The only thing I have managed to achieve is detection of the big rectangle:
Now I have no idea how to detect those small squares. I tried all sorts of different algorithms, but the contrast in the squares seems too low. Does anybody have any ideas?
Other pattern images: https://dl.dropboxusercontent.com/u/3681534/kalibrator/6.png https://dl.dropboxusercontent.com/u/3681534/kalibrator/8.png
A way to progress would be to determine the gray-value level at the inner border of the rectangle, then:
1. Adjust the average brightness inside the rectangle to one value (the small squares will still be a bit lighter than the rest).
2. Increase the contrast a lot.
3. Find the lines that run along the edges of the squares.
4. Either access the line crossings directly, or paint the squares white and black.
5. Calculate your calibration data.
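A rough OpenCV sketch of the first three steps (the filename and all thresholds are placeholder assumptions and will need tuning on these noisy scans):

import cv2
import numpy as np

img = cv2.imread("6.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Remove the slow brightness gradient inside the rectangle, then
# boost the local contrast with CLAHE.
background = cv2.GaussianBlur(img, (51, 51), 0)
flat = cv2.normalize(cv2.subtract(img, background), None, 0, 255,
                     cv2.NORM_MINMAX)
enhanced = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(8, 8)).apply(flat)

# Find the lines running along the square edges.
edges = cv2.Canny(enhanced, 40, 120)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=10)
# Intersecting the (near-)horizontal and (near-)vertical lines then
# gives the corner positions needed for the calibration data.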
I'm importing my stimuli from a folder. I would like to make them bigger (the actual image size is 120 px high x 170 px wide). I've tried to double the size by using this code in the PsychoPy Coder:
stimuli.append(visual.ImageStim(win=win, name='image', units='cm', size=[9, 6.3]))
(I used double the numbers in cm), but this distorts the image. Is there any way to enlarge it without distorting it, or do I have to change the stimuli themselves?
Thank you
Just to answer what Michael said in the comment: no. If you scale an image up, the only way of guessing what lies between the pixels is interpolation. This is what PsychoPy does and what any software would do. To make an analogy: take a picture of a distant tree using your digital camera, then scale the image up in any software you like. You won't suddenly be able to see the individual leaves, since the software had no such information as input.
If you need higher resolution, put higher-resolution images in your folder. If it's simple shapes, you may use built-in stimuli such as visual.ShapeStim and its variants visual.Polygon, visual.Rect, and visual.Circle. PsychoPy can scale these shapes freely so they always stay sharp.
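If the distortion comes from the aspect ratio rather than resolution, a minimal sketch of doubling the on-screen size while keeping the native 170 x 120 pixel proportions (the window settings and filename are placeholder assumptions):

from psychopy import visual

win = visual.Window([800, 600], units='pix')
img = visual.ImageStim(
    win=win,
    image='stimulus.png',      # placeholder; the original is 170 x 120 px
    units='pix',
    size=[170 * 2, 120 * 2],   # exactly twice the native size, same ratio
)
img.draw()
win.flip()

The same logic applies with units='cm': keep the width:height ratio at 170:120 (e.g. [9, 6.35] rather than [9, 6.3]).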
I'm searching for methods of text recognition based on document borders.
Or methods that can solve the problem of finding a new viewpoint.
For example, the camera is at point (x1, y1, z1) and the resulting picture has perspective distortions, but we can find a point (x2, y2, z2) for the camera that corrects the picture.
Thanks.
The usual approach, which assumes that the document's page is approximately flat in 3D space, is to warp the quadrangle encompassing the page into a rectangle. To do so you must estimate a homography, i.e. a (linear) projective transformation between the original image and its warped counterpart.
The estimation requires matching points (or lines) between the two images, and a common choice for documents is to map the page corners in the original image to the image corners of the warped image. This will in general produce a rectangle with an incorrect aspect ratio (i.e. the warped page will look "wider" or "taller" than the real one), but this can easily be corrected if you happen to know the real aspect ratio in advance (for example, because you know the type of paper used, whether letter, A4, etc.).
A simple algorithm to perform the estimation is the so-called Direct Linear Transformation.
The OpenCV library contains routines that help accomplish all of these tasks; look into it.
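As a sketch of this pipeline in OpenCV (the filenames, corner coordinates, and output size are placeholder assumptions; in practice the page corners would come from a detection step such as contour analysis):

import cv2
import numpy as np

img = cv2.imread('document.jpg')  # placeholder filename

# Page corners in the source image, ordered TL, TR, BR, BL (placeholders).
src = np.float32([[120, 80], [980, 110], [1010, 1400], [90, 1380]])

# Target rectangle; pick its size from the known paper aspect ratio
# (1 : sqrt(2) for A4 paper).
w = 800
h = int(w * 2 ** 0.5)
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Estimate the homography from the four point correspondences and warp.
H = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, H, (w, h))
cv2.imwrite('document_rectified.jpg', warped)

With exactly four correspondences cv2.getPerspectiveTransform suffices; with more (possibly noisy) matches you would use cv2.findHomography, which performs the DLT estimation with optional robust fitting.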
I have a robot and a camera. The robot is just a 3D printer where I replaced the extruder with a tool, so it doesn't print, but it moves every axis independently. The bed is transparent, and below the bed there is a camera that never moves. It is just a normal webcam (a PlayStation Eye).
I want to calibrate the robot and the camera, so that when I click on a pixel in an image provided by the camera, the robot will go there. I know I can measure the translation and rotation between the two frames, but that will probably introduce a lot of error.
So that's my question: how can I relate the camera and the robot? The camera is already calibrated using chessboards.
To make everything easier, the Z-axis can be ignored, so the calibration will be over X and Y only.
It depends on what error is acceptable to you.
We have a similar setup, where a camera looks at a plane with an object on it that can be moved.
We assume that the image plane and the working plane are parallel.
First, let's calculate the rotation. Put the tool in a position where you see it at the center of the image, move it along one axis, then select the point in the image corresponding to the new tool position.
Those two points give you a vector in the image coordinate system.
The angle between this vector and the original image axis gives the rotation.
The scale can be calculated in a similar way: knowing the vector's length (in pixels) and the distance between the tool positions (in mm or cm) gives you the scale factor between the image axes and the real-world axes.
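As a minimal sketch of this computation (the pixel coordinates and travel distance below are placeholder measurements):

import math

p0 = (312.0, 240.5)   # tool seen at image position 1 (pixels)
p1 = (498.0, 262.0)   # tool seen at image position 2 (pixels)
moved_mm = 50.0       # known distance moved along one robot axis

dx, dy = p1[0] - p0[0], p1[1] - p0[1]
theta = math.atan2(dy, dx)              # rotation between image and robot axes
scale = moved_mm / math.hypot(dx, dy)   # scale factor: mm per pixel

def pixel_to_robot(px, py):
    """Map a clicked pixel to robot X/Y, relative to the pose at p0."""
    ux, uy = px - p0[0], py - p0[1]
    # Rotate by -theta to align the image axes with the robot axes,
    # then convert pixels to millimetres.
    rx = (ux * math.cos(theta) + uy * math.sin(theta)) * scale
    ry = (-ux * math.sin(theta) + uy * math.cos(theta)) * scale
    return rx, ry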
If this method doesn't provide enough accuracy, you may calibrate the camera for lens distortion and its position relative to the plane using computer vision techniques, which is more complicated.
See the following links
http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut10.html