Using the Linux C++ version 3.1.1-2802 of the SDK.
The descriptions of FaceDetectorMode::LARGE_FACES (To target faces occupying a large area) and FaceDetectorMode::SMALL_FACES (To target faces occupying a small area) are a bit vague/confusing.
By "large" is it, for instance, faces that occupy more than 30% of the image's area or between 50% and 80%, or a certain amount of pixels, or what?
Experimenting with PhotoDetector in FaceDetectorMode::LARGE_FACES and an image that contains one face at multiple resolutions (720p, 480p, 240p), I've found that the face can't occupy most of the image (more than around 30% of the image's width/height) and has to be a certain minimum size in pixels in order to be detected, but I can't figure out the relationship.
The main difference between large face mode and small face mode is the minimum size of face they will detect. The image property of interest is the minimum image dimension (the smaller of width and height):
Large face: find faces that are at least ~15% of the min. image dimension
Small face: find faces that are at least ~4% of the min. image dimension
Example: in a 640x480 image, 480 is the dimension of interest, which works out to about 72 pixels (0.15 * 480) for large face mode and about 19 pixels (0.04 * 480) for small face mode.
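If it helps, here is that arithmetic as a quick sketch; the 0.15/0.04 factors are just the approximate ratios described above, not official SDK constants:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Rough estimate of the minimum detectable face size in pixels.
    // The 0.15 / 0.04 factors are the approximate ratios described above,
    // not constants taken from the SDK.
    static int minFaceSizePx(int imageWidth, int imageHeight, bool largeFacesMode)
    {
        const double factor = largeFacesMode ? 0.15 : 0.04;
        const int minDimension = std::min(imageWidth, imageHeight);
        return static_cast<int>(std::lround(factor * minDimension));
    }

    int main()
    {
        std::printf("640x480 LARGE_FACES: ~%d px\n", minFaceSizePx(640, 480, true));  // ~72
        std::printf("640x480 SMALL_FACES: ~%d px\n", minFaceSizePx(640, 480, false)); // ~19
        return 0;
    }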
The reason there are two modes is that we've tuned each one for the accuracy and speed tradeoff that works best for its use case: small face mode for image collections, large face mode for a person sitting in front of a camera.
Related
Perhaps my mind isn't mathematically competent enough to do this, but here goes:
I am using Photoshop. I have two images taken from different heights. Both images contain the same object (so the real size of this object stays the same), and I am trying to resize both images so that this object is the same pixel size in each. That way I can properly measure the difference between other objects in the images with the proper ratio.
My end goal is to measure the differences of scars healing (before and after) using a same-size object in both images as a baseline.
To measure the difference in the photos, I have been counting pixels using the histogram feature.
Even though I changed the pixel width and height to roughly the same size, the two images have a drastically different number of pixels. So comparing the red or white counts from the before to the after won't make sense until I can get these to match.
Can anyone point me in the right direction here? How can I compare apples to apples here?
So I went a different route here, in case anyone was wondering what I did.
Rather than change the size of the images, I just calculated the increase manually and compensated for it separately.
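For example (made-up numbers): if the baseline object measures 200 px across in the "before" photo and 250 px in the "after" photo, the "after" photo is at 1.25x the scale of the "before". Lengths measured in the "after" then get divided by 1.25, and pixel counts from the histogram (which are areas) get divided by 1.25^2 ≈ 1.56, before comparing them with the "before" values.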
This is possibly a dumb question, but I'm not seeing how the math works here. I have an image, 414w x 584h. To get an image scaled down to half this size (i.e., half the initial width and height) using [UIImage imageWithCGImage:scale:orientation:], I have to set the scale argument to 6.0.
Why is it 6.0? How does this value relate to say, a width scale of 414/207 = 2.0, or a height scale, same value, 584/292 = 2.0?
As I write this, I'm wondering... my app is running on an iPhone 6+, so could it have something to do with the 3x Retina display? I.e., a normal scale factor of 2.0, which is dimensionless, but when applied to images on the 6+, to get to pixels, I have to multiply by 3? Is that the logic?
And, I guess, while I'm here, is there a better way to resize an image, using available iOS facilities? E.g., some special affine transform, etc.? No particular concerns around memory or performance; the images are all no more than 1000 wide by 1500 or so high.
Thanks a lot!
You can find a great tutorial at http://nshipster.com/image-resizing/
The Documentation says:
The scale factor to use when interpreting the image data. Specifying a scale factor of 1.0 results in an image whose size matches the pixel-based dimensions of the image. Applying a different scale factor changes the size of the image as reported by the size property.
For Scale:
If you load an image from a file whose name includes the #2x modifier,
the scale is set to 2.0. You can also specify an explicit scale factor
when initializing an image from a Core Graphics image. All other
images are assumed to have a scale factor of 1.0.
If you multiply the logical size of the image (stored in the size
property) by the value in this property, you get the dimensions of the
image in pixels.
For size:
In iOS 4.0 and later, this value reflects the logical size of the
image and is measured in points. In iOS 3.x and earlier, this value
always reflects the dimensions of the image measured in pixels.
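Putting those two quotes together with your guess: with a scale of 6.0 the image reports a size of 414/6 x 584/6 ≈ 69 x 97 points, and on the 6+'s 3x screen each point is drawn with 3 pixels, so (assuming you draw the image at its reported point size) you end up at 207 x 292 pixels, which is exactly the 2x reduction you wanted. In other words, 6.0 = your intended 2.0 downscale multiplied by the 3x screen scale, so yes, that is the logic.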
I'm trying to do an image resizing operation where:
Image stock is at 2x dimensions (for Retina)
If the detected device is low resolution (standard), reduce the image by 50% back to 1x (i.e. zoom=0.5)
If the device's max resolution is 800, make sure the image does not go over 800 wide (i.e. maxwidth=800)
However, when I combine the two operations (i.e. zoom=0.5&maxwidth=800), it basically gives me an image that is 800 x 50% = 400 wide. But I would like the image first reduced by 50% (e.g. if the image was 2000w x 1000h, reduce it to 1000w x 500h), and then make sure the width does not go over 800 (i.e. 800w x 400h).
Is there any way to approach this?
Thanks in advance!
Stephen
Zoom operates after all other sizing operations, such as width/height/maxwidth/maxheight. This ensures that you can add 'zoom' to any image command set and zoom the result.
I.e., zoom multiplies the 'result' size, not the source size.
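So with zoom=0.5&maxwidth=800 on a 2000w x 1000h source, maxwidth is applied first (800w x 400h) and zoom then halves that result (400w x 200h). To get the behaviour you describe, work out the target width yourself, e.g. min(2000 * 0.5, 800) = 800, and send just maxwidth=800 (or whatever the smaller of "half the source width" and 800 turns out to be).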
If you're doing responsive images, you should consider trying slimmage.js. It's rather ideal if you want to handle both pixel densities and CSS viewport or element rules effectively together.
If you really need to build your own solution, you'll need to do the math either client side (and set maxwidth alone) or server side (and add a command that applies your custom rules).
Full disclosure: I wrote Slimmage, so I personally think it's the best all-around solution, of course :)
I have an app that creates a large bitmap and later the user can add some labels. Everything works great as long as the base bitmap is the default 96x96 resolution. If I bump it up to 300 for instance, then the text applied with Graphics.DrawString is much too large - a petite size 8 or 10 font displays like it is 20.
On the one hand, it makes sense given the resolution increase, but on the other, you'd think the Fonts would scale. MeasureString returns a larger size when measured on a 300 vs 96 dpi bitmap, which wasn't really what I expected.
I've tried tricking it by creating a small bitmap of the appropriate size, printing to it, then pasting that to the master image. But when pasted to the high res it enlarges the pasted image.
The only other thing I can think of is to create a high res temp bitmap, print to it, then shrink it before pasting to the main image. That seems like a long way to go. Is there a compositing or overlay type setting that allows this? Are font sizes only true for a 96 dpi canvas?
Thanks for any hints/advice!
The size of a font is expressed in points, and a point is a physical unit: 1/72 inch. So if you draw into a bitmap that has 300 dots-per-inch, then your font is going to use a lot more dots for the requested number of inches. So when you display it on a 300 dpi device you'll get back the size in inches that you asked for.
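For example, a 10 point font has an em size of 10/72 ≈ 0.14 inches: at 96 dpi that is about 13 pixels, but at 300 dpi it is about 42 pixels, roughly three times as many dots for the same nominal point size.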
Problem is, you are not displaying it on a 300 dpi device, you are displaying it on a 96 dpi device. So it looks much bigger.
Clearly you don't really want a 300 dpi bitmap. Or you want to draw it three times smaller. Take your pick.
If you want a consistent size in pixels, specify UnitPixel when creating your Font object.
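For instance (illustrative numbers), a font created with a size of 13 in pixel units will come out at roughly 13 pixels regardless of whether the bitmap reports 96 or 300 dpi, so the labels stay the same pixel size on the high resolution bitmap.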
My cocos2d-iphone game has tiled maps. The tileset textures are rather big. I've got around 5 tilesets and each one is 2048x2048 (retina).
My maps are around 80x80. They have around 8 layers and each one is obviously using one tileset.
The frame rate falls (it sometimes drops to around 30; I know 30 is rather acceptable, but still, I want 50+).
So given that the textures are huge, I can't afford to use many layers (since each one loads one of these textures).
So how about I divide my tileset textures into much smaller tilesets (like 1024x1024 each)? That will allow me to use many more layers for my maps, right?
Are there any other tips for huge retina display tile maps?
A 2048x2048 texture with 32-bit color equals 16 MB (!) of memory (2048 * 2048 pixels * 4 bytes per pixel). Five times that is 80 MB of memory just for the textures. Ouch! For a tilemap that is relatively tiny (80x80) that's an enormous amount of texture memory.
First order of optimization would be to use PVR textures if you really can't reduce the number of tilesets or images within them. You lose some image quality but the memory consumption will go down dramatically, and the rendering performance of PVR textures is a lot better. Of course while working with Tiled you'll have to use the (presumably) PNG texture which you then convert to PVR for use in the project, for example using TexturePacker.
8 tile layers can be pretty hefty, but it depends on how you use them and how many tiles of each layer are actually drawn. Try this: set all but one layer to visible = NO. Then turn the layers back on one by one and see how that affects the framerate.
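If you're on cocos2d-x (the idea is the same in cocos2d-iphone with layerNamed:/setVisible:), here's a rough sketch of that experiment; the .tmx name and layer name are placeholders for your own:

    #include "cocos2d.h"
    USING_NS_CC;

    // Load the map, hide every tile layer, then re-enable just one for this test run.
    // "level1.tmx" and "ground" below are placeholder names.
    static CCTMXTiledMap* loadMapForLayerTest(const char* layerToShow)
    {
        CCTMXTiledMap* map = CCTMXTiledMap::create("level1.tmx");

        CCObject* child = NULL;
        CCARRAY_FOREACH(map->getChildren(), child)
        {
            if (CCTMXLayer* layer = dynamic_cast<CCTMXLayer*>(child))
                layer->setVisible(false);            // hide all tile layers first
        }
        if (CCTMXLayer* layer = map->layerNamed(layerToShow))
            layer->setVisible(true);                 // show only the layer under test
        return map;
    }

    // e.g. this->addChild(loadMapForLayerTest("ground")); then watch the FPS counter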
Finally you should know that Cocos2D's tilemap implementation is utterly inefficient past a certain number of tiles. There have been attempts to improve the tilemap renderer, for example this one (HKTMXTiledMap) may be worth giving a shot.
I had the same problem. My solution is simply to convert the .png files to .pvr.ccz; both the file size and the in-memory footprint drop dramatically. Here are my steps:
use TexturePacker to convert tileset files (png files) to pvr.ccz. Make sure it's a 1:1 mapping (same size, no rotation, no border padding, no trim...), and they should be the same size (e.g. 2048 x 2048)
open your .tmx file and change the png file path to your pvr.ccz file.
that's it! It works for cocos2d-x in my case. Before the change my game took 106 MB of memory, and only ~90 MB after the change, and that's from converting just one tileset texture.