How to quickly know layer dimensions in GIMP?

Every time I want to know the layer dimensions in GIMP, I open the "Scale Layer" dialog to check them. Is there a better way to see this at a glance? Maybe some configuration option to show it to the bottom/right of the layer name, or in the bottom bar...
Maybe this could be a Gimp feature request?
Thank you!

GIMP does have ways to configure the status bar (in Preferences -> Image Windows -> Title & Status), but there is currently no way to display the layer size.
It could be made a feature request. On one hand it is an easy task, and someone starting to collaborate with the project might tackle it. On the other hand, the project suffers from a lack of manpower for development, and the roadmap already includes getting rid of "layer dimensions" altogether (in the future layers should just expand/contract automatically, with options at export time for fixing sizes when needed). Anyway, it would be worth requesting this as a feature at bugzilla.gnome.org.
What is possible to do programmatically now is to write a small Python script that opens its own GTK window with text-entry widgets and runs its own main loop (Python plug-ins in GIMP run in a separate process, so it is fine for them to have their own main loop), calling something like the following at regular intervals:
layer = pdb.gimp_image_get_active_layer(img)
width = layer.width; height = layer.height
and feeding those values to your window. The "img" parameter will be passed when you start the plug-in, and you will have to run one instance of it for each working image (there is no PDB call to get the active image in GIMP).
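A minimal sketch of that approach, assuming GIMP 2.8's Python-Fu with PyGTK (the plug-in name, menu path and polling interval are arbitrary choices, not anything GIMP prescribes):

#!/usr/bin/env python
# Sketch only: a Python-Fu plug-in that opens its own PyGTK window and
# polls the active layer's size twice a second.
from gimpfu import *

def layer_size_monitor(img, drawable):
    import gtk, gobject

    window = gtk.Window()
    window.set_title("Layer size")
    label = gtk.Label("")
    window.add(label)
    window.connect("destroy", gtk.main_quit)
    window.show_all()

    def update():
        layer = img.active_layer   # same value as pdb.gimp_image_get_active_layer(img)
        if layer is not None:
            label.set_text("%d x %d px" % (layer.width, layer.height))
        return True                 # keep the timeout alive

    gobject.timeout_add(500, update)
    gtk.main()                      # the plug-in runs its own main loop, as described above

register(
    "python-fu-layer-size-monitor",
    "Show the active layer's size",
    "Opens a small window that tracks the active layer's dimensions",
    "", "", "",
    "<Image>/Filters/Layer Size Monitor...",
    "*",
    [],
    [],
    layer_size_monitor)

main()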
UPDATE
After the feature request opened by the OP, the feature was implemented in the development branch of GIMP and is available as the %x and %y codes to be used in the status/title bar in GIMP git master (Edit -> Preferences -> Image Windows -> Title & Status). It should be available from GIMP 2.10 onwards.
UPDATE
I found out there is no easy way to learn the available codes for the status bar short of checking the source code, so I am pasting them here:
%f: base filename
%F: full filename
%p: PDB id
%i: instance
%t: image type
%T: drawable type
%s: user source zoom factor
%d: user destination zoom factor
%z: user zoom factor (percentage)
%D: dirty flag
%C: clean flag
%B: dirty flag (long)
%A: clean flag (long)
%m: memory used by image
%M: image size in megapixels
%l: number of layers
%L: number of layers (long)
%n: active drawable name
%P: active drawable PDB id
%W: width in real-world units
%w: width in pixels
%H: height in real-world units
%h: height in pixels
%u: unit symbol
%U: unit abbreviation
%X: drawable width in real-world units
%x: drawable width in pixels
%Y: drawable height in real-world units
%y: drawable height in pixels
%o: image's color profile name
%e: view offsets in pixels
%r: view rotation angle in degrees
(Please note that some of these may not be available in GIMP 2.8.)
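For example (assuming a GIMP 2.10 build where the drawable codes above are present), a title or status format string like

%f: %w x %h px (layer %n: %x x %y px)

shows both the image size and the active layer's name and size at a glance.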

Related

A way to keep colors in a black & white picture when changing image mode from indexed color to greyscale?

I took pictures like these.
The files generated from the camera are BMP.
The problem is I need to load them in a certain program which, I noticed, works only when in Photoshop I go to Mode -> Grayscale. The images are originally Indexed Color or RGB Color (which I used to make some adjustments, but the end result is like you see in these pictures). But when I change the mode to Grayscale I notice the pictures changing (though I am not sure, because I mainly use the mean value from the histogram to measure various areas with the square marquee tool; the mean value changes by around 10 points, but again I am unsure if I should be using that value or if there is some other way to measure the average value of a color in an area). But the image should remain the same since it's black & white, right?

Turi Create rescales and moves my object annotation coordinates

I created and merged an images SFrame with an Annotations SFrame.
I have verified that the coordinates of the annotation boxes match the locations of the features measured in Photoshop.
However the models I create are non-functional, so I explored the merged data set with
data['image_with_ground_truth'] = tc.object_detector.util.draw_bounding_boxes(data['image'], data['annotations'])
and I find that all the annotations are squashed into the top-left corner in Turi Create, despite them actually being widely distributed on the source image as in the second image. The annotations list column shows that the coordinates are read into TC correctly, but they are mapped badly into what the model sees as bounding boxes.
Where should I look to find the scaling problem in Turi Create?
The version of ml-annotate I was using output coordinates with different scale factors for each image in the set: some close, some off by as much as 3.3x.
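A quick way to confirm (and work around) that kind of mismatch is to rescale each annotation against its image's actual pixel size. A minimal sketch, assuming Turi Create-style annotations (a list of dicts with 'label' and 'coordinates' per row); the canvas_width/canvas_height parameters are hypothetical, since the real fix depends on what ml-annotate exported:

import turicreate as tc

def rescale_annotations(data, canvas_width, canvas_height):
    # Sketch: map annotation coordinates from the annotation tool's canvas
    # size to each image's real pixel size.
    def fix_row(row):
        sx = float(row['image'].width) / canvas_width
        sy = float(row['image'].height) / canvas_height
        fixed = []
        for ann in row['annotations']:
            c = dict(ann['coordinates'])
            c['x'] *= sx
            c['width'] *= sx
            c['y'] *= sy
            c['height'] *= sy
            fixed.append({'label': ann['label'], 'coordinates': c})
        return fixed
    data['annotations'] = data.apply(fix_row)
    return data

# Re-drawing the ground truth afterwards should show whether the boxes now
# land on the features:
# data['image_with_ground_truth'] = tc.object_detector.util.draw_bounding_boxes(
#     data['image'], data['annotations'])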

Is there any way I can enlarge a stimulus in PsychoPy without losing image quality?

I'm importing my stimuli from a folder. I would like to make them bigger; the actual image size is 120 px (height) x 170 px (width). I've tried to double the size by using this code in the PsychoPy Coder:
stimuli.append(visual.ImageStim(win=win, name='image', units='cm', size=[9, 6.3]))
(I used double the numbers in cm) but this distorts the image. Is there any way to enlarge it without distorting it, or do I have to change the stimuli themselves?
Thank you
Thank you
Just to answer what Michael said in the comment: no, if you scale an image up, the only way of guessing what is in between pixels is interpolation. This is what PsychoPy does and what ANY software would do. To make an analogy: take a picture of a distant tree using your digital camera, then scale the image up using all kinds of software. You won't suddenly be able to see the individual leaves, since the software had no such information as input.
If you need higher resolution, put higher-resolution images in your folder. If it's simple shapes, you may use built-in methods such as visual.ShapeStim and its variants: visual.Polygon, visual.Rect and visual.Circle. PsychoPy can scale these shapes freely so they always stay sharp.
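If the concern is proportions rather than sharpness, keeping the original width:height ratio (170:120) avoids stretching, and scaling in pixel units makes that explicit. A minimal sketch (the window size, image path and scale factor are illustrative only):

from psychopy import visual

win = visual.Window(size=[800, 600], units='pix')

orig_w, orig_h = 170, 120            # original image size in pixels
scale = 2.0                          # enlargement factor

stim = visual.ImageStim(
    win=win,
    image='stimuli/example.png',     # hypothetical path
    units='pix',
    size=[orig_w * scale, orig_h * scale],   # keeps the 170:120 ratio, so no stretching
)
stim.draw()
win.flip()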

Lock the SWF stage size

How do I lock the stage size of the SWF file, so that no matter how the user stretches the player, it won't expose what is outside the stage (e.g. enemy spawn points, the stage's background, etc.)?
That depends on the size you specify in your HTML embed. Give it fixed sizes instead of percentages in the embed, or better yet use SWFObject to embed your SWF:
http://livedocs.adobe.com/flex/3/html/help.html?content=wrapper_13.html
http://code.google.com/p/swfobject/
Or you could just mask your stage area, so that anything outside it is masked out.

resolution from a PDFPage?

I have a PDF document that is created by creating NSImages with sizes in 72 dpi points; each has a single representation which is measured in pixels. I then put these images into PDFPages with initWithImage, and then save the document.
When I open the document, I need the resolution of the original image. However, all of the rectangles that PDFPage gives me are measured in points, not pixels.
I know that the information is in there, and I suppose I can try to parse the PDF data myself, by going through the voyeur.app example... but that's a WHOLE lot of effort to do something that should be pretty normal...
Is there an easier way to do this?
Added:
I've tried two techniques:
1. Get the PDFRepresentation data from the page, and use it to make a new NSImage via initWithData. This works; however, the image has both size and pixel size in 72 dpi.
2. Draw the PDFPage into a new off-screen context, and then get a CGImage from that. The problem is that when I'm making the context, it appears that I need to know the size in pixels already, which defeats part of the purpose...
There are a few things you need to understand about PDF:
The PDF Coordinate system is in points (1/72 inch) by default.
The PDF Coordinate system is devoid of resolution. (this is a white lie - the resolution is effectively the limits of 32 bit floating point numbers).
Images in PDF do not inherently have any resolution attached to them (this is a white lie - images compressed with JPEG2000 still have resolution in their embedded metadata).
An Image in PDF is represented by an object that contains a series of samples that are stored using some compression filter.
Image objects can be rendered on a page multiple times at any size.
Since resolution is defined as the number of pixels (or samples) per unit distance, resolution only means something for a particular rendering of an image on a page. So if you are rendering a particular image to fill the page, then the resolution in dpi is
xdpi = image_width / (pageWidthInPoints / 72.0);
ydpi = image_height / (pageHeightInPoints / 72.0);
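For example (numbers purely illustrative): an image 1700 samples wide rendered to fill a US Letter page, which is 612 points wide, gives xdpi = 1700 / (612 / 72.0) = 200.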
If the image is not being rendered to the full size of the page, a complete solution is very tricky. Adobe prescribes that images should be treated as being 1x1 and that you change the page transformation matrix to determine how to render them. This means that you would need the matrix at the point of rendering the image, and you would need to push the points (0, 0), (0, 1), (1, 0) through the matrix. The Euclidean distance between (0, 0)' and (1, 0)' will give you the width in points, and the Euclidean distance between (0, 0)' and (0, 1)' will give you the height in points.
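As a rough illustration of that unit-square computation, here is a small sketch (the matrix values are made up; obtaining the real CTM is the part covered next):

import math

def rendered_size_in_points(ctm):
    # ctm is a PDF matrix (a, b, c, d, e, f); a point (x, y) maps to
    # (a*x + c*y + e, b*x + d*y + f). The image is drawn as the unit square.
    a, b, c, d, e, f = ctm
    def apply(x, y):
        return a * x + c * y + e, b * x + d * y + f
    ox, oy = apply(0, 0)
    wx, wy = apply(1, 0)
    hx, hy = apply(0, 1)
    return math.hypot(wx - ox, wy - oy), math.hypot(hx - ox, hy - oy)

# Made-up CTM that places the image at 288 x 216 points (4 x 3 inches):
w_pts, h_pts = rendered_size_in_points((288, 0, 0, 216, 100, 500))
# dpi then follows from the image's sample counts, e.g. 1200 x 900 samples:
xdpi = 1200 / (w_pts / 72.0)   # 300.0
ydpi = 900 / (h_pts / 72.0)    # 300.0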
So how do you get that matrix? Well, you need the content stream for the page and you need to write a PDF interpreter that can rip the content stream and keep track of changes to the CTM. When you reach your image, you extract the CTM for it.
Doing that last step should take about an hour with a decent PDF toolkit, provided you are familiar with the toolkit. Writing that toolkit is several person-years of work.