Mac OS X 10.7.4 comes with new icons that have image reps at 144 DPI. The problem is that when I load one of these icons into an NSImage, I only get reps reporting a size of 512px. That is: I load a 1024px/144dpi .icns file into an NSImage and ask every image rep for its size, and no rep reports 1024px; the largest I get is 512px. This happens no matter whether a rep is 72dpi or 144dpi (in fact the new icons in 10.7.4, like TextEdit's or Automator's, have reps at both resolutions for every size except 1024px, which exists only as a single 144dpi rep).
Why does NSImageRep seem not to know its real resolution? And why do I get this issue only for 1024px/144dpi and not, for example, for 512px/144dpi?
If I read the TIFFRepresentation of the NSImage and write it back to a file, I get a correct 1024px/144dpi TIFF file, whereas if I write the same NSImage through CGImageSource/CGImageDestination as kUTTypeTIFF, I get a 1024px/72dpi file.
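For reference, the ImageIO path I mean looks roughly like this (a sketch; the file paths are placeholders):

#import <Cocoa/Cocoa.h> // on OS X this pulls in ImageIO and kUTTypeTIFF

NSURL *icnsURL = [NSURL fileURLWithPath:@"/tmp/icon.icns"];
NSURL *tiffURL = [NSURL fileURLWithPath:@"/tmp/icon.tiff"];
CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)icnsURL, NULL);
CGImageDestinationRef dst = CGImageDestinationCreateWithURL(
    (CFURLRef)tiffURL, kUTTypeTIFF, CGImageSourceGetCount(src), NULL);
for (size_t i = 0; i < CGImageSourceGetCount(src); i++) {
    // each image is copied over, but the result reports 72dpi, not 144dpi
    CGImageDestinationAddImageFromSource(dst, src, i, NULL);
}
CGImageDestinationFinalize(dst);
CFRelease(dst);
CFRelease(src);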
All of this has me very confused.
Thanks a lot
The docs for -[NSImageRep size] say:
The size of the image representation, measured in *points* in the user coordinate space.
(Emphasis added.)
This is not a measurement in pixels. It's a measurement in points, so an image that is 1024 pixels at 144 dpi measures 512 points, since points are defined at 72 per inch (1024 / 144 × 72 = 512).
You want to query the -pixelsWide and -pixelsHigh methods (if, indeed, you care about the pixel dimensions; often you should not).
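You can see the distinction by enumerating the reps. A minimal sketch (the TextEdit icon path is just an example):

#import <Cocoa/Cocoa.h>

NSImage *icon = [[NSImage alloc] initWithContentsOfFile:
    @"/Applications/TextEdit.app/Contents/Resources/Edit.icns"];
for (NSImageRep *rep in [icon representations]) {
    // size is in points; pixelsWide/pixelsHigh are the real pixel counts,
    // so a 1024px/144dpi rep reports size 512x512 but pixels 1024x1024
    NSLog(@"points: %@  pixels: %ldx%ld",
          NSStringFromSize([rep size]),
          (long)[rep pixelsWide], (long)[rep pixelsHigh]);
}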
I'm importing my stimuli from a folder. I would like to make them bigger (the actual image size is 120 px high x 170 px wide). I've tried to double the size by using this code in the PsychoPy Coder:
stimuli.append(visual.ImageStim(win=win, name='image', units='cm', size=[9, 6.3]))
(I used double the numbers in cm), but this distorts the image. Is there any way to enlarge it without distorting it, or do I have to change the stimuli themselves?
Thank you
Just to answer what Michael said in the comment: no, if you scale an image up, the only way to guess what lies between the pixels is interpolation. This is what PsychoPy does and what any software would do. To make an analogy: take a picture of a distant tree using your digital camera, then scale the image up using whatever software you like. You won't suddenly be able to see the individual leaves, since the software never had that information as input.
If you need higher resolution, put higher-resolution images in your folder. If your stimuli are simple shapes, you may use built-in methods such as visual.ShapeStim and its variants visual.Polygon, visual.Rect and visual.Circle. PsychoPy can scale these shapes freely so they always stay sharp.
I am getting an image's size from a link on the internet. The image's actual size is 630x420, but when I print the image size (image.size.width and image.size.height) I get 129x86.
When I log the NSImage, I can see there is a Pixels field with the right size, which is different from the Size field. So why is that, and how do I get the pixel size?
I know that in UIImage, the size is the actual size of the image.
<NSImage 0x600000260fc0 Size={129.59999999999999, 86.399999999999991} Reps=(
"NSBitmapImageRep 0x6000000b0b00 Size={129.59999999999999, 86.399999999999991} ColorSpace=(not yet loaded) BPS=8 BPP=(not yet loaded) Pixels=630x420 Alpha=NO Planar=NO Format=(not yet loaded) CurrentBacking=nil (faulting) CGImageSource=0x600000171c40"
The size of an image in Cocoa, just like the size of most everything else, is measured in points. These are idealized coordinates that are assumed to be 1/72nd of an inch.
You may think that images are measured in pixels, and some are, but they can also have "real world", "physical" dimensions. An image has physical dimensions if its metadata describes the relationship between its pixels and some real world unit of length. A common way to express that relationship is DPI, dots per inch.
The image data you obtain from the net has a DPI stored in its metadata. It seems that it has a DPI of 350. 630 pixels at 350 DPI is 1.8 inches. 420 pixels at 350 DPI is 1.2 inches. So, the image is 1.8 inches x 1.2 inches in size.
Cocoa then expresses this in points: 1.8 inches × 72 points per inch = 129.6 points, and 1.2 inches × 72 = 86.4 points. Those are the values you are getting from NSImage, within the ability of floating-point numbers to represent them.
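In formula form:

points = pixels / dpi * 72
630 / 350 * 72 = 129.6
420 / 350 * 72 = 86.4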
Cocoa is correct to consider this image as 129.6 points wide and 86.4 points high. If it didn't, then the image would draw at the wrong size on screen or when printing, given its DPI.
For the general case, you must also recognize that NSImage is a wrapper around image representations. An NSImage is just "a thing that can draw itself", not anything more specific than that. There may be multiple representations for any given image. Also, not all representations are bitmaps. Some may be vector graphics (e.g. PDF). Others may be procedural, using code to draw themselves. NSBitmapImageRep is the class which represents bitmap images.
OK, so it seems that with NSImage, size is the size in points, not the pixel size.
To get the pixel size:
// an NSImage holds one or more reps; for an image loaded from bitmap data,
// the first rep is the NSBitmapImageRep that holds the pixels
NSImageRep *rep = [[image representations] objectAtIndex:0];
// pixelsWide/pixelsHigh are in pixels, unlike size, which is in points
NSSize imageSize = NSMakeSize(rep.pixelsWide, rep.pixelsHigh);
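Note that representations can contain several reps, and not all of them are bitmaps, so a slightly more defensive version might scan them all (a sketch, not the only way):

NSInteger pixelsWide = 0, pixelsHigh = 0;
for (NSImageRep *rep in [image representations]) {
    if ([rep isKindOfClass:[NSBitmapImageRep class]]) {
        // keep the dimensions of the largest bitmap rep
        pixelsWide = MAX(pixelsWide, [rep pixelsWide]);
        pixelsHigh = MAX(pixelsHigh, [rep pixelsHigh]);
    }
}
NSSize pixelSize = NSMakeSize(pixelsWide, pixelsHigh); // 630x420 for this image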
I have an app that creates a large bitmap, and later the user can add some labels. Everything works great as long as the base bitmap is at the default 96x96 dpi resolution. If I bump it up to 300, for instance, then text applied with Graphics.DrawString is much too large: a petite size 8 or 10 font displays as if it were 20.
On the one hand, it makes sense given the resolution increase, but on the other, you'd think the fonts would scale. MeasureString returns a larger size when measured on a 300 dpi bitmap than on a 96 dpi one, which wasn't really what I expected.
I've tried tricking it by creating a small bitmap of the appropriate size, drawing to it, then pasting that into the master image. But pasting into the high-res image enlarges the pasted image.
The only other thing I can think of is to create a high-res temp bitmap, draw to it, then shrink it before pasting it into the main image. That seems like a long way to go. Is there a compositing or overlay-type setting that allows this? Are font sizes only true for a 96 dpi canvas?
Thanks for any hints/advice!
The size of a font is expressed in points, and a point is a physical unit: 1/72 inch. So if you draw into a bitmap that has 300 dots per inch, your font is going to use a lot more dots to cover the requested number of inches. When you then display it on a 300 dpi device, you get back exactly the physical size you asked for.
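To see the scale involved, the pixel height of drawn text is its point size divided by 72, times the dots per inch of the target bitmap:

pixels = points / 72 * dpi
10 pt at 96 dpi: 10 / 72 * 96 ≈ 13 px
10 pt at 300 dpi: 10 / 72 * 300 ≈ 42 px (about 3x bigger, matching 300/96)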
The problem is that you are not displaying it on a 300 dpi device; you are displaying it on a 96 dpi device. So it looks much bigger.
Clearly you don't really want a 300 dpi bitmap. Or you want to draw it three times smaller. Take your pick.
If you want a consistent size in pixels, specify UnitPixel when creating your Font object.
My cocos2d-iphone game has tiled maps. The tileset textures are rather big: I have around 5 tilesets, each 2048x2048 (retina).
My maps are around 80x80 tiles. They have around 8 layers, and each layer obviously uses one tileset.
The frame rate falls (it sometimes drops to around 30; I know 30 is rather acceptable, but still, I want 50+).
Given that the textures are huge, I can't afford to use many layers (since each one loads one of those textures).
So how about dividing my tileset textures into much smaller ones (like 1024x1024 each)? That would allow me to use many more layers in my maps, right?
Are there any other tips for huge retina display tile maps?
A 2048x2048 texture at 32 bits per pixel takes 16 MB (!) of memory; five of those is 80 MB just for the textures. Ouch! For a tilemap that is relatively tiny (80x80), that's an enormous amount of texture memory.
The first order of optimization would be to use PVR textures, if you really can't reduce the number of tilesets or the images within them. You lose some image quality, but memory consumption goes down dramatically and the rendering performance of PVR textures is much better. While working in Tiled you'll still use the (presumably) PNG texture, which you then convert to PVR for use in the project, for example with TexturePacker.
8 tile layers can be pretty hefty, but it depends on how you use them and how many tiles of each layer are actually drawn. Try this: set all but one layer to visible = NO, then turn the layers back on one by one and see how that affects the framerate.
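A quick way to script that test (a sketch; the map file and layer name are placeholders from a typical cocos2d-iphone setup):

CCTMXTiledMap *map = [CCTMXTiledMap tiledMapWithTMXFile:@"level1.tmx"];
// hide everything, then re-enable layers one at a time, watching the FPS
for (CCTMXLayer *layer in [map children]) {
    layer.visible = NO;
}
[[map layerNamed:@"ground"] setVisible:YES]; // repeat for each layer in turn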
Finally, you should know that Cocos2D's tilemap implementation is utterly inefficient past a certain number of tiles. There have been attempts to improve the tilemap renderer; one of them (HKTMXTiledMap) may be worth giving a shot.
I had the same problem. My solution was simply to convert the .png files to .pvr.ccz; both the file size and the in-memory footprint dropped dramatically. Here are my steps:
Use TexturePacker to convert the tileset files (PNG files) to pvr.ccz. Make sure it's a 1:1 mapping (same size, no rotation, no border padding, no trim...), and keep the output the same size as the source (e.g. 2048x2048).
Open your .tmx file and change the PNG file path to point at your pvr.ccz file, as in the fragment below.
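The tileset reference inside the .tmx is plain XML, so the edit looks something like this (the names and sizes here are just an example):

<tileset firstgid="1" name="tiles" tilewidth="32" tileheight="32">
  <image source="tiles.pvr.ccz" width="2048" height="2048"/>
</tileset>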
That's it! It works for cocos2d-x in my case. Before the change my game took 106 MB of memory, and only ~90 MB after the change, and that's from converting just one tileset texture.
I have a PDF document that is created by making NSImages sized in 72dpi points, each with a single representation measured in pixels. I then put these images into PDFPages with initWithImage and save the document.
When I open the document, I need the resolution of the original image. However, all of the rectangles that PDFPage gives me are measured in points, not pixels.
I know that the information is in there, and I suppose I can try to parse the PDF data myself, by going through the voyeur.app example... but that's a WHOLE lot of effort to do something that should be pretty normal...
Is there an easier way to do this?
Added:
I've tried two techniques:
1. Get the PDFRepresentation data from the page, and use it to make a new NSImage via initWithData. This works; however, the image has both its size and its pixel size at 72dpi.
2. Draw the PDFPage into a new off-screen context, and then get a CGImage from that. The problem is that when I'm making the context, it appears that I need to know the size in pixels already, which defeats part of the purpose...
There are a few things you need to understand about PDF:
The PDF coordinate system is in points (1/72 inch) by default.
The PDF coordinate system is devoid of resolution. (This is a white lie: the resolution is effectively the limits of 32-bit floating point numbers.)
Images in PDF do not inherently have any resolution attached to them. (Another white lie: images compressed with JPEG2000 still carry a resolution in their embedded metadata.)
An image in PDF is represented by an object that contains a series of samples stored using some compression filter.
Image objects can be rendered on a page multiple times at any size.
Since resolution is defined as the number of pixels (or samples) per unit distance, resolution only means something for a particular rendering of an image on a page. So if you are rendering a particular image to fill the page, then the resolution in dpi is
xdpi = image_width / (pageWidthInPoints / 72.0);
ydpi = image_height / (pageHeightInPoints / 72.0);
If the image is not being rendered at the full size of the page, a complete solution is very tricky. Adobe prescribes that images be treated as 1x1 and that the page transformation matrix (CTM) determine how they are rendered. This means you need the matrix in effect at the point the image is drawn, and you push the points (0,0), (1,0) and (0,1) through it. The Euclidean distance between (0,0)' and (1,0)' gives you the width in points, and the Euclidean distance between (0,0)' and (0,1)' gives you the height in points.
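As a sketch of that last computation (this assumes you already have the CTM from your interpreter; CoreGraphics is used only for the matrix math):

#include <CoreGraphics/CoreGraphics.h>
#include <math.h>

// Given the CTM in effect when a 1x1-unit image is painted, plus the
// image's pixel dimensions, compute its effective resolution in dpi.
static void effectiveDPI(CGAffineTransform ctm,
                         double pixelsWide, double pixelsHigh,
                         double *xdpi, double *ydpi) {
    CGPoint o = CGPointApplyAffineTransform(CGPointMake(0, 0), ctm);
    CGPoint u = CGPointApplyAffineTransform(CGPointMake(1, 0), ctm);
    CGPoint v = CGPointApplyAffineTransform(CGPointMake(0, 1), ctm);
    double widthInPoints  = hypot(u.x - o.x, u.y - o.y); // |(1,0)' - (0,0)'|
    double heightInPoints = hypot(v.x - o.x, v.y - o.y); // |(0,1)' - (0,0)'|
    *xdpi = pixelsWide / (widthInPoints  / 72.0);
    *ydpi = pixelsHigh / (heightInPoints / 72.0);
}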
So how do you get that matrix? Well, you need the content stream for the page and you need to write a PDF interpreter that can rip the content stream and keep track of changes to the CTM. When you reach your image, you extract the CTM for it.
That last step should take about an hour with a decent PDF toolkit, provided you are familiar with the toolkit. Writing that toolkit is several person-years of work.