I'm trying to do an image resizing operation where:
Image stock is at 2x dimension (for Retina)
If the detected device is low resolution (standard), reduce the image by 50% back to 1x (i.e. zoom=0.5)
If the device's max resolution is 800, cap the image width at 800 (i.e. maxwidth=800)
However, when I combine the two operations (i.e. zoom=0.5&maxwidth=800), it basically gives me an image that is 800 x 50% = 400 wide. But I would like to have the image first reduced by 50% (e.g. if the image was 2000w x 1000h, reduce it to 1000w x 500h), then make sure the width does not go over 800 (i.e. 800w x 400h).
Is there any way to approach this?
Thanks in advance!
Stephen
Zoom operates after all other sizing operations, such as width/height/maxwidth/maxheight. This ensures that you can add 'zoom' to any image command set and zoom the result.
I.e., zoom multiplies the 'result' size, not the source size.
If you're doing responsive images, you should consider trying slimmage.js. It's rather ideal if you want to handle both pixel densities and CSS viewport or element rules effectively together.
If you really need to build your own solution, you'll need to do the math either client side (and set maxwidth alone) or server side (and add a command that applies your custom rules).
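For example, a minimal sketch of that server-side math (the function name and values here are hypothetical, not part of ImageResizer):

def effective_maxwidth(source_width, zoom=0.5, cap=800):
    # Shrink the 2x source first, then cap the result, and send the
    # outcome as a plain maxwidth so zoom never multiplies it again.
    shrunk = source_width * zoom      # e.g. 2000 * 0.5 = 1000
    return int(min(shrunk, cap))      # e.g. min(1000, 800) = 800

print(effective_maxwidth(2000))   # -> 800: a 2000w x 1000h source becomes 800w x 400h
print(effective_maxwidth(1400))   # -> 700: the cap is never reached

You would then request maxwidth=800 alone, with no zoom parameter.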
Full disclosure: I wrote Slimmage, so I personally think it's the best all-around solution, of course :)
Perhaps my mind isn't mathematically competent enough to do this, but here it goes:
I am using Photoshop. I have 2 images taken from different heights. Both images contain the same object (so its real size is unchanged), and I am trying to resize both images so that this object is the same pixel size in each. That way I can properly measure the differences between other objects in the images at the proper ratio.
My end goal is to measure the differences of scars healing (before and after) using a same-size object in both images as a baseline.
To measure the difference in the photos, I have been counting pixels using the histogram feature.
Even though I changed the pixel width and height to roughly the same size, the two images have a drastically different number of pixels. So comparing the red or white from the before to the after won't make sense until I can get these to match.
Can anyone point me in the right direction here? How can I compare apples to apples here?
So I went a different route here, in case anyone was wondering what I did.
Rather than change the size of the images, I just calculated the scale increase manually and applied it separately.
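For anyone after the same thing, the arithmetic is short; a sketch of it (all the pixel values below are made up):

ref_before_px = 250    # reference object's width in the 'before' photo, in pixels
ref_after_px = 200     # the same object's width in the 'after' photo

scale = ref_before_px / ref_after_px    # linear scale between the two photos
area_scale = scale ** 2                 # histogram counts are areas, so square it

scar_before_px = 12000   # histogram pixel count in the 'before' photo
scar_after_px = 9500     # histogram pixel count in the 'after' photo

# Convert the 'after' count into 'before'-photo units before comparing.
comparable_after = scar_after_px * area_scale
print(comparable_after / scar_before_px)   # fraction of the original scar area remaining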
I have some 1000x1000 images where one pixel is 1 and the rest 0. My issue is that I can't even see the location of the high pixel using imshow() unless I zoom in all over the place to look for it. I assume it is doing a nearest lookup when decimating to screen resolution. Is there any trick I can use to get around this? I know I can convolve with a kernel of some kind to expand the point, but this is a little bit expensive computationally, and if I zoom in it won't look correct.
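One cheap workaround (just a sketch, not a canonical answer): locate the hot pixel with NumPy and overlay a marker on top of imshow(). Markers are drawn at screen resolution, so nothing is convolved and the underlying data stays correct when you zoom:

import numpy as np
import matplotlib.pyplot as plt

img = np.zeros((1000, 1000))
img[137, 642] = 1.0            # the lone hot pixel (position made up)

row, col = np.argwhere(img == img.max())[0]   # find it

fig, ax = plt.subplots()
ax.imshow(img, cmap='gray')
# An open circle drawn over the pixel; x is the column, y is the row.
ax.plot(col, row, 'o', mfc='none', mec='red', ms=12)
plt.show()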
I have a large background and am currently using texture atlasing to display it (It's 2000x2000). This works great, however when I scale the node out to a certain extent, black space appears like so:
http://imgur.com/TStVRtR
I used the following code to scale it (with map being the node variable):
map.runAction(SKAction.scaleTo(0.1, duration: 2))
So with all this in mind, is there a way to stop it from showing that black space? Instead of showing the black space, could it simply tile the image so the gap never appears?
Your background is just repeated squares, isn't it? Instead of having one 2000 x 2000 you could have 100 items of 200 x 200 tiles. Am I correct?
If so, you can add those evenly spread out 100 tiles to a parent SKNode, and then scale that parent node. If you want to scale out even further, I guess you need to add more tiles to that SKNode.
As an optimization, you can replace 10 x 10 tiles with only one tile with another texture, but that's only an optimization. Don't do that unless you have to.
I'm importing my stimuli from a folder. I would like to make them bigger (the actual image size is 120 px high x 170 px wide). I've tried to double the size by using this code in the PsychoPy Coder:
stimuli.append(visual.ImageStim(win=win, name='image', units='cm', size=[9, 6.3]))
(I used double the numbers in cm), but this distorts the image. Is there any way to enlarge it without distorting it, or do I have to change the stimuli themselves?
Thank you
Just to answer what Michael said in the comment: no, if you scale an image up, the only way of guessing what is in between pixels is interpolation. This is what PsychoPy does and what ANY software would do. To make an analogy: take a picture of a distant tree using your digital camera, then scale the image up using all kinds of software. You won't suddenly be able to see the individual leaves, since the software had no such information as input.
If you need higher resolution, put higher-resolution images in your folder. If it's simple shapes, you may use built-in methods such as visual.ShapeStim and its variants: visual.Polygon, visual.Rect and visual.Circle. PsychoPy can scale these shapes freely so they always stay sharp.
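And if you do scale the bitmap anyway, the distortion itself is avoidable by deriving the height from the width using the image's own 170:120 aspect ratio. A sketch, assuming units='cm' and a monitor calibrated for cm (the filename is hypothetical):

from psychopy import visual

win = visual.Window(units='cm')   # assumes the monitor is calibrated for cm
stimuli = []

# The bitmap is 170 px wide x 120 px high; derive the height from the
# width so the aspect ratio is preserved exactly.
width_cm = 9.0
height_cm = width_cm * 120.0 / 170.0   # ~6.35 cm, not 6.3

stimuli.append(visual.ImageStim(win=win, name='image', units='cm',
                                image='stim.png',   # hypothetical filename
                                size=[width_cm, height_cm]))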
I'm working with documents, so maintaining the original image dimensions (and hence the DPI) is important.
The aspect ratio is always maintained, so the automatic fill modes and the like don't seem to have any effect.
Say I have a 300 dpi document and the user wants to clear an inch border around the image. So I need an inch cropped from the image, but the result needs to keep the original image dimensions (2550x3300).
I have been able to achieve this effect with...
...&crop=300,300,-300,-300&margin=300,300,300,300
This works, but seems more than a little clunky. I've tried a lot of other combinations but they all seem to enlarge or reduce the image size which is undesirable in my case.
So does someone know a simpler syntax to achieve the desired result, or do I need to resize the image and then calculate and fill with a margin as I'm doing now?
Thanks
It turns out that my example requests the image at its full size, which turns out to be a special case. When I introduce a width or height into the command line, things don't work very well, since crop size is relative to the original image dimensions while margin size is relative to the result image.
Thinking about it more, I abandoned the crop approach. What I really needed was a way to introduce a clipping region into the result bitmap, so I built an extension to do just that. It works well: it doesn't interfere with any of Resizer's layout calculations, and the size of the returned image is whatever the height or width were specified as, which is just what I needed. The Faces plugin has an example of introducing a clipping region.
Karlton
Cropping and re-adding 300px on each edge is best accomplished exactly the way you're doing it:
&crop=300,300,-300,-300&margin=300
What kind of improved syntax would you expect? This isn't a common operation.
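If the clunkiness is just the repeated numbers, the command is cheap to generate; a hypothetical helper (not part of ImageResizer) that derives both values from the DPI and the border width:

def border_command(dpi=300, border_in=1.0):
    # Crop border_in inches off every edge, then pad the same amount
    # back, so the output keeps the original pixel dimensions.
    px = int(dpi * border_in)   # 300 dpi * 1 in = 300 px
    return "&crop={0},{0},-{0},-{0}&margin={0}".format(px)

print(border_command())           # &crop=300,300,-300,-300&margin=300
print(border_command(150, 0.5))   # &crop=75,75,-75,-75&margin=75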