Converting Images with High DPI and Low Pixels to Low DPI and High Pixels - photoshop

I'm currently in the process of making an interactive map. From previous experience I learned that it is all about pixel count, so that the map can cover a large area. This map needs to cover an area twice the size of the Earth.
My original map is 35000 x 20000 pixels at 300 dpi, which gives me great zoomability for the interactive map.
However, I was just contacted by an artist who offered to make the map, and they suggested a size of 9000 x 6000 pixels, but at 1200 dpi, saying that this can be resized up to my original dimensions without loss of quality.
My issue here is that I don't understand how that could work, and so I don't want to start the job and pay for it until I'm sure that the end result will meet my needs.
PS: I do not know if this is the right Stack Exchange site to be asking this question on, and looking at the "Similar Questions" in the right pane suggests it isn't. If that is true, perhaps somebody can point me in the right direction.

This obviously won't work: DPI matters only for printing. So 9000 x 6000 at whatever DPI will have to be scaled up to 35000 x 20000 with loss of quality. Maybe the artist meant something else?
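To illustrate with the numbers from the question (a minimal sketch; the helper function is just for demonstration): DPI only changes the implied print size, never the amount of detail, so the smaller file still has to be upscaled roughly 3.9x per axis to reach the original dimensions.

    # Print size in inches is simply pixels / dpi; the pixel count never changes.
    def print_size_inches(width_px, height_px, dpi):
        return width_px / dpi, height_px / dpi

    print(print_size_inches(35000, 20000, 300))   # ~(116.7, 66.7) inches
    print(print_size_inches(9000, 6000, 1200))    # (7.5, 5.0) inches

    # Reaching 35000 px from 9000 px means inventing ~3.9x the detail per axis.
    print(35000 / 9000)                           # ~3.89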
And the proper Stack Exchange site for this question would be https://graphicdesign.stackexchange.com

Related

Matplotlib, can't see single non-zero pixel in hi-res image

I have some 1000x1000 images where one pixel is 1 and the rest 0. My issue is that I can't even see the location of the high pixel using imshow() unless I zoom in all over the place to look for it. I assume it is doing a nearest lookup when decimating to screen resolution. Is there any trick I can use to get around this? I know I can convolve with a kernel of some kind to expand the point, but this is a little bit expensive computationally, and if I zoom in it won't look correct.
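For reference, a minimal sketch of the situation and one possible workaround (the pixel position is chosen arbitrarily here): overlay an explicit marker at the non-zero coordinate instead of relying on imshow's downsampling.

    import numpy as np
    import matplotlib.pyplot as plt

    # 1000x1000 image with a single non-zero pixel (position chosen arbitrarily).
    img = np.zeros((1000, 1000))
    img[412, 637] = 1.0

    # When the figure is smaller than the array, imshow's downsampling can drop
    # the lone pixel, so mark its location explicitly with a scatter overlay.
    rows, cols = np.nonzero(img)
    plt.imshow(img, cmap='gray', interpolation='nearest')
    plt.scatter(cols, rows, s=80, facecolors='none', edgecolors='red')
    plt.show()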

Viola-Jones - what does the 24x24 window mean?

I'm learning about the Viola-Jones detection framework and I read that it uses a 24x24 base detection window[1][2]. I'm having problems understanding this base detection window.
Let's say I have an image of size 1280x960 pixels and 3 people in it. When I try to perform face detection on this image, will the algorithm:
Shrink the picture to 24x24 pixels,
Tile the picture with 24x24 pixel large sections and then test each section,
Position the 24x24 window in the top left of the image and then move it by 1px over the whole image area?
Any help is appreciated, even a link to another explanation.
Source: https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf
[1] - page 2, last paragraph before Integral images
[2] - page 4, Results
Does this video help? It is 40 minutes long.
Adam Harvey Explains Viola-Jones Face Detection
Also called Haar Cascades, the algorithm is very popular for face-detection.
About half way down that page is another video which shows a super slow-mo scan in progress so you can see how the window starts small (although much larger than 24x24 for the purpose of demonstration) and shifts around the image pixel by pixel, then does it again and again on successively larger square portions. At each stage, it's still only looking at those windows as though they were resampled to the 24x24 size.
You can also see how it quickly rejects many of those windows and spends most of its time in areas that seem face-like while it computes more and more complex comparisons that become more stringent. This is where the term "cascade" comes into play.
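A rough sketch of that scanning scheme (pure Python; the step size, scale factor, and the idea of simply enumerating windows are illustrative rather than taken from any particular implementation):

    def scan(image_w, image_h, base=24, scale_step=1.25, shift=1):
        """Enumerate the windows a Viola-Jones style detector evaluates:
        slide the base window over the image, enlarge it, and slide again."""
        size = base
        while size <= min(image_w, image_h):
            for y in range(0, image_h - size + 1, shift):
                for x in range(0, image_w - size + 1, shift):
                    # Each window is treated as if resampled down to 24x24
                    # before the cascade of classifiers looks at it.
                    yield x, y, size
            size = int(size * scale_step)

    # Count the candidate windows for a 1280x960 image with a coarser shift.
    print(sum(1 for _ in scan(1280, 960, shift=8)))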
I found this video that perfectly explains how the detection window moves and scales on a picture. I wanted to draw a flowchart of how this looks, but I think the video illustrates it better:
https://vimeo.com/12774628
Credits to the original author of the video.

GIMP & Photoshop Gaussian Blur issue?

I'm trying hard to nicely blur a red circle, but every time I get gradient levels of red and the image looks choppy.
Before:
http://i.imgur.com/6yzMhFI.png
After:
http://i.imgur.com/2dZl4ph.png
How can I achieve a smooth blur?
If you are referring to the visible circles that separate the gradation levels, that is called banding. Here are some ways to fix that:
Increase your document's bit level from 8-bit to 16-bit
This will increase the amount of colors your file can represent, creating more colors that can be used to represent the gradient, making it smoother in appearance.
In Photoshop navigate to Image>Mode>16-Bits/Channel
In GIMP 2.10 (or higher?), navigate to Image>Precision>16 bit
Display or system settings might be unable to display enough colors
If changing the bit depth does not fix the issue then you might have a hardware or system settings issue.
If it's a hardware issue, your monitor might not have the capability to display enough colors to render the gradient smoothly.
If it's a system settings issue, you will need to go to your operating system's color depth setting, usually located under the system's display settings. It could say something like Millions of Colors, or True Color (32-bit).
The last thing related to settings is that you have a bad color profile set in your system or in your image editing software. That is beyond the scope of this answer; if you don't know how to color calibrate your monitor, then it most likely isn't the cause and you can skip this.
If you have to have 8-bits
If you absolutely have to keep your document in 8-bit color space, then you will have to use dithering or add some noise to your image to trick the viewer's brain into seeing a smooth gradient.
Noise or dithering confuses the viewer's brain into seeing a smoother gradient by drawing some focus onto the imperfections of the noise/grain/dithering. This doesn't exactly answer your question, but it is about the only option you have if you keep your ultra-smooth gradient in 8-bit mode.
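As a rough illustration of that idea (a NumPy/Pillow sketch; the gradient, noise amplitude, and file names are all made up for the demonstration):

    import numpy as np
    from PIL import Image

    # A smooth horizontal gradient computed in floating point (0..255).
    grad = np.tile(np.linspace(0, 255, 1024), (256, 1))

    # Straight 8-bit quantization - this is where visible banding comes from.
    banded = grad.astype(np.uint8)

    # Add less than one level of random noise before rounding, then quantize;
    # the grain breaks the bands up so the gradient reads as smooth.
    noisy = grad + np.random.uniform(-0.5, 0.5, grad.shape)
    dithered = np.clip(noisy, 0, 255).astype(np.uint8)

    Image.fromarray(banded).save('banded.png')
    Image.fromarray(dithered).save('dithered.png')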
Good Luck!
I think you are applying the Gaussian Blur to the entire image. Try selecting the red circle first and then applying the Gaussian Blur filter to just that selection.

Is there any way I can enlarge a stimulus in #psychopy without losing image quality?

I'm importing my stimuli from a folder. I would like to make them bigger (the actual image size is 120 px high x 170 px wide). I've tried to double the size by using this code in the PsychoPy Coder:
stimuli.append(visual.ImageStim(win=win, name='image', units='cm', size= [9, 6.3],
(I used double the size in cm), but this distorts the image. Is there any way to enlarge it without distortion, or do I have to change the stimuli themselves?
Thank you
Just to answer what Michael said in the comment: no, if you scale an image up, the only way of guessing what is in between pixels is interpolation. This is what psychopy does and what ANY software would do. To make an analogy: take a picture of a distant tree using your digital camera. Then scale the image up using all kinds of software. You won't suddenly be able to see the individual leaves since the software had no such information as input.
If you need higher resolution, put higher-resolution images in your folder. If it's simple shapes, you may use built-in methods such as visual.ShapeStim and its variants: visual.Polygon, visual.Rect and visual.Circle. PsychoPy can scale these shapes freely so they always stay sharp.
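For instance, a minimal sketch along these lines (the window settings, size, and colours here are assumptions, not taken from the question):

    from psychopy import visual, core

    win = visual.Window(size=(800, 600), units='cm', monitor='testMonitor')

    # A vector-drawn circle: PsychoPy rasterizes it at whatever size is requested,
    # so enlarging it never shows the blur you get from upscaling a small bitmap.
    circle = visual.Circle(win, radius=3, fillColor='red', lineColor='black')
    circle.draw()
    win.flip()
    core.wait(2)
    win.close()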

What ratio from pixel to meter will be best and preferable?

I'm using the following ratio for pixel-to-meter conversion:
PTM_RATIO=32;
v3BodyDef.position.Set(2848/PTM_RATIO, 102/PTM_RATIO);
This often produces weird output on the screen. I don't know whether setting the position (v3BodyDef.position.Set) takes a floating-point variable or not, but I think this conversion is causing trouble.
Please help me with this.
Thank you.
There isn't a recommendable ratio for that (though some will try and convince you there is).
The scale of objects in your physics engine should depend on the average scale of your dynamic objects. What I mean is that if your player interacts with a lot of objects "slightly larger" and "slightly smaller" than itself, it's probably best to make the player an average size in the optimal range (for example, Box2D is optimized for objects between 0.1m and 10m in size, so make the player 1m, or 1.5m).
As for your pixel size, that all depends on how large you want your world to be on the screen.
If you want your hero to be 1/10th of the screen in height, and 2 meters away from the camera, then do the math :-p Others may want their hero to be 1/8th of screen height, or 1/12th... that really depends on how the game will look in the end. If the camera zooms in, the pixel-to-physics ratio would change. If the screen resolution changes (like a retina display), your pixel-to-physics ratio will have to change accordingly.
So in practice: there is no set value. It really depends on the game, and depends on what feels best for the hardware you're on.
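To make that "do the math" concrete, here is a small sketch (all of the numbers are invented for illustration): if the hero is 2 m tall and should fill a tenth of a 640-pixel-tall screen, the ratio falls out directly.

    screen_height_px = 640     # vertical resolution (assumed for the example)
    hero_fraction    = 0.10    # hero should take up 1/10th of the screen height
    hero_height_m    = 2.0     # hero size in physics-world meters

    hero_height_px = screen_height_px * hero_fraction   # 64 px on screen
    ptm_ratio = hero_height_px / hero_height_m          # 32 px per meter
    print(ptm_ratio)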
It's most likely an integer division problem: with PTM_RATIO declared as an int, 102/PTM_RATIO evaluates to 3 instead of 3.1875. Change PTM_RATIO to a float (or, if you are defining it as a macro, use #define PTM_RATIO 32.0f).