How to crop based on scaled dimensions? - imageresizer

Is there a way to scale an image down in size by say, 50 percent, and crop the scaled output? I've been tinkering with this, but it seems that any cropping is based on the original image dimensions rather than the scaled dimensions.

Cropping by percentage:
?crop=20,20,80,80&cropxunits=100&cropyunits=100
Cropping by the output size:
?width=400&height=300&crop=80,60,320,240&cropxunits=400&cropyunits=300
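The `cropxunits`/`cropyunits` parameters just define the coordinate space the crop values are expressed in; ImageResizer maps them onto the source image proportionally. A rough Python sketch of that mapping (not ImageResizer's actual code; the 1000×800 source size is made up for illustration):

```python
def crop_to_source_px(value, units, source_dim):
    """Map a crop coordinate expressed in custom units onto source pixels.
    The coordinate space is `units` wide, so scale by source_dim / units."""
    return value * source_dim / units

# crop=20,20,80,80&cropxunits=100&cropyunits=100 on a 1000x800 source:
x1 = crop_to_source_px(20, 100, 1000)  # 200.0
y1 = crop_to_source_px(20, 100, 800)   # 160.0
x2 = crop_to_source_px(80, 100, 1000)  # 800.0
y2 = crop_to_source_px(80, 100, 800)   # 640.0
```

Setting the units equal to the requested output size (as in the second example) lets you think in output coordinates regardless of how big the source actually is.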

Related

When I resize a big TIFF image to less than 25% (say 17%), the pixels are wrapped from left to right

I'm using ImageMagick v7.16.0. I have a very big image of size 440×1700, and I want it resized to 17% of the original size.
I tried the Resize, Scale and Rescale methods with the intended width/height parameters, and also with MagickGeometry.
But the resized image wraps pixels from the left side to the right side. Click here for the image
Can anyone help me understand why this wrapping happens?
The wrapping appears only when I resize to less than 25%; at 50% or 80% there is no wrapping.

ImageResizer does not crop the width, only the height

I'm having an issue with my images not being cropped by ImageResizer. It crops the height fine but does not crop the width.
here is an example: https://media.hillarys.co.uk/asset//media/10222/zen-collection-mishima-dawn-curtains.jpg?width=850&height=450&mode=Crop&quality=70
If I set the height to 400 it crops fine. However, if I set the width to 300, it resizes the image instead of cropping it.
This is really starting to get frustrating now. Any help?
Thanks
I see an image that is 850x450 pixels.
mode=crop (as opposed to mode=max or mode=pad) minimally crops to achieve the required aspect ratio, then scales the image down to the precise dimensions you requested.
If you don't want a minimal crop, then you should use crop=x1,y1,x2,y2. These can be expressed as percentages instead of coordinates with cropxunits=100&cropyunits=100.
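The "minimal crop" behaviour described above can be sketched in a few lines of Python (an illustration of the idea, not ImageResizer's actual implementation): whichever dimension is proportionally too large gets trimmed to match the target aspect ratio, and only then is the result scaled.

```python
def minimal_crop_then_scale(src_w, src_h, dst_w, dst_h):
    """Return the crop size (in source pixels) that mode=crop would
    roughly take before scaling down to dst_w x dst_h."""
    target_aspect = dst_w / dst_h
    src_aspect = src_w / src_h
    if src_aspect > target_aspect:
        # Source is too wide: trim the sides, keep the full height.
        return src_h * target_aspect, src_h
    else:
        # Source is too tall: keep the full width, trim top/bottom.
        return src_w, src_w / target_aspect

# A 1700x1000 source requested as 850x450: the source is too tall
# for the 850:450 aspect, so only the height gets trimmed (to 900),
# which matches the behaviour reported in the question.
print(minimal_crop_then_scale(1700, 1000, 850, 450))
```

So whether the width or the height gets cropped depends purely on how the source aspect ratio compares to the requested one, which is why changing the requested height (or width) flips which edge is trimmed.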

Get image height and width in centimeters

When I drag an image into Photoshop and go to "Image Size" I can see the size in cm (e.g. 80×30 cm). In Windows 7, the only size-related details you can see about an image are pixels and resolution. Is there a way in VB.NET to get the image height and width in centimeters?
One pixel = 0.264583 millimeters, but only under the common assumption of 96 DPI (25.4 mm / 96 ≈ 0.264583 mm); at other resolutions the figure differs.
There is a difference between Dots Per Inch (DPI) and Device-Independent Pixels (DIPs):
DPI and Device-Independent Pixels
Best Regards
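The physical size follows directly from the pixel count and the DPI. A minimal sketch of the arithmetic in Python (the question asks about VB.NET, where the DPI can be read from `Image.HorizontalResolution` / `Image.VerticalResolution`; the formula is the same):

```python
def pixels_to_cm(pixels, dpi):
    """Convert a pixel dimension to centimeters at a given DPI.
    1 inch = 2.54 cm, so cm = pixels / dpi * 2.54."""
    return pixels / dpi * 2.54

# The 0.264583 mm/pixel figure above is just this at 96 DPI:
print(pixels_to_cm(1, 96) * 10)  # ≈ 0.2645833 mm
```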

UIImage resizing - how scaleFactor value derived is unclear

This is possibly a dumb question, but I'm not seeing how the math works here. I have an image, 414w x 584h. To get an image scaled down to half this size (i.e., half the initial width and height) using [UIImage imageWithCGImage:scale:orientation:], I have to set the scale factor to 6.0.
Why is it 6.0? How does this value relate to say, a width scale of 414/207 = 2.0, or a height scale, same value, 584/292 = 2.0?
As I write this, I'm wondering... my app is running on an iPhone 6+. So could it have something to do with the 3x Retina display? I.e., normal scale factor of 2.0, which is dimensionless, but when applied to images on the 6+, to get to pixels, I have to do 3x this? Is this the logic?
And, I guess, while I'm here, is there a better way to resize an image, using available iOS facilities? E.g., some special affine transform, etc.? No particular concerns around memory or performance; the images are all no more than 1000 wide by 1500 or so high.
Thanks a lot!
You can find a great tutorial at http://nshipster.com/image-resizing/
The Documentation says:
The scale factor to use when interpreting the image data. Specifying a scale factor of 1.0 results in an image whose size matches the pixel-based dimensions of the image. Applying a different scale factor changes the size of the image as reported by the size property.
For Scale:
If you load an image from a file whose name includes the #2x modifier,
the scale is set to 2.0. You can also specify an explicit scale factor
when initializing an image from a Core Graphics image. All other
images are assumed to have a scale factor of 1.0.
If you multiply the logical size of the image (stored in the size
property) by the value in this property, you get the dimensions of the
image in pixels.
For size:
In iOS 4.0 and later, this value reflects the logical size of the
image and is measured in points. In iOS 3.x and earlier, this value
always reflects the dimensions of the image measured in pixels.
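Putting the two quoted definitions together explains the 6.0: `size` reports points (pixels divided by scale), and a 3x device renders each point with 3 pixels, so a scale of 6.0 combines the desired 2x reduction with the 3x screen factor. A sketch of the arithmetic using the 414-pixel-wide image from the question:

```python
pixel_width = 414
scale_factor = 6.0   # the value the questioner had to use
screen_scale = 3.0   # iPhone 6 Plus renders at 3x

# UIImage.size reports points: pixels / scale
point_width = pixel_width / scale_factor        # 69.0 points
# On a 3x display those points are drawn with this many pixels:
rendered_pixels = point_width * screen_scale    # 207.0 = 414 / 2
print(point_width, rendered_pixels)
```

So the questioner's guess is right: 6.0 = 2.0 (the intended downscale) × 3.0 (the device's display scale).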

drawInRect loses resolution when drawn to smaller image?

When I draw a large image (say 1000×1000 pixels) into a rect of, say, 200×200 pixels using drawInRect, and then use drawInRect again to draw that result back at the original 1000×1000 size, is resolution affected? Does the resolution decrease by drawing a large image small and then scaling the same image back up?
Hopefully I've gotten your question correct in my head.
If you take an image bigger than 200x200 pixels and draw it into a 200x200 pixel rectangle, it'll get scaled down and lose most of its detail. If you then take the resultant image and try to draw it in a bigger rectangle it'll just get scaled up. So, to answer your question: yes. It'll look blurry as hell. It's no different from resizing an image down in a graphics editor and then blowing it back up to its original size. The loss of detail is permanent; there's no way to know what was lost on the way down.
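A back-of-the-envelope way to see why the round trip is lossy, using the sizes from the question: the 200×200 intermediate keeps only 4% of the original samples, and upscaling cannot reinvent the other 96%.

```python
src = (1000, 1000)     # original image
small = (200, 200)     # intermediate drawInRect target

src_px = src[0] * src[1]        # 1,000,000 source pixels
small_px = small[0] * small[1]  # 40,000 pixels survive the downscale
surviving = small_px / src_px
print(f"{surviving:.0%} of the original pixels remain")  # 4%
```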