Is there a way to set the size of an EPS image which is embedded in a PostScript file?

I have an EPS image (Encapsulated PostScript) and I embedded it in a PostScript file. I'd like to set its size in concrete units, for example 50x50 millimeters. I found a way to resize it with the scale operator, like
.7 .7 scale
but that way I can only give it a ratio, not a concrete size.
Is there a way to do so?

PostScript is device-independent, so you aren't supposed to try to set things to a specific size in terms of output. This allows devices to use differently sized media (for example, with hardware margins included), or different resolutions, and still produce the desired output.
Bearing in mind that EPS files are intended for use by applications that will embed the EPS in their own output, it's important that the EPS itself be scalable. The application can read the BoundingBox from the EPS file, and then scale that into its own co-ordinate system so that the EPS fits a specific size at a particular position on the output.
Basically, you need to work out the scale factor yourself: take the EPS BoundingBox and compute the factor that fits it into the area you want the EPS to cover on your output.
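As a rough illustration, here is a minimal Python sketch of that arithmetic (assuming you have already read the %%BoundingBox values out of the EPS header; the numbers below are made up). PostScript works in points, 72 per inch and 25.4 mm per inch, so you convert the target size to points and divide by the BoundingBox extent:

# Hypothetical values: llx, lly, urx, ury come from the EPS %%BoundingBox comment.
llx, lly, urx, ury = 0, 0, 200, 150          # BoundingBox in points
target_w_mm, target_h_mm = 50.0, 50.0        # desired size on the page

POINTS_PER_MM = 72.0 / 25.4                  # PostScript points per millimetre

scale_x = (target_w_mm * POINTS_PER_MM) / (urx - llx)
scale_y = (target_h_mm * POINTS_PER_MM) / (ury - lly)

# Emit the PostScript that scales the EPS before including it, e.g.
# "sx sy scale" (plus a translate if the BoundingBox does not start at the origin).
print(f"{scale_x:.4f} {scale_y:.4f} scale")

With these example numbers the result is roughly "0.7087 0.6803 scale", which is exactly the kind of value you would put in place of ".7 .7 scale". Use the same factor on both axes if you want to preserve the aspect ratio.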

Vulkan swapchain format UNORM vs SRGB?

In a Vulkan program, fragment shaders generally output single-precision floating-point colors in the range 0.0 to 1.0 for each red/green/blue channel, and these are then written to (blended into) the swapchain image that is then presented to the screen. The floating-point values are encoded into bits according to the format of the swapchain image (specified when the swapchain is created).
When I change my swapchain format from VK_FORMAT_B8G8R8A8_UNORM to VK_FORMAT_B8G8R8A8_SRGB I observe that the overall brightness of the frames is greatly increased, and also there are some minor color shifts.
My understanding of the SRGB format was that it was a lot like the UNORM format, just having a different mapping of floating-point values to 8-bit integers, such that it had higher color resolution in some areas and less in others, but the actual meaning of the "pre-encoded" RGB floating-point values remained unchanged.
So I'm a little surprised about the brightness increase. Is my understanding of sRGB encoding wrong? And/or is such a brightness increase expected vs UNORM?
Or maybe I have a bug and a brightness increase is not expected?
Update:
I've observed that if I use SRGB swapchain images and also load my images/textures in VK_FORMAT_B8G8R8A8_SRGB format rather than VK_FORMAT_B8G8R8A8_UNORM then the extra brightness goes away. It looks the same as if I use VK_FORMAT_B8G8R8A8_UNORM swapchain images and load my images/textures in VK_FORMAT_B8G8R8A8_UNORM format.
Also, if I put the swapchain image into VK_FORMAT_B8G8R8A8_UNORM format and then load the images/textures with VK_FORMAT_B8G8R8A8_SRGB, the frames look extra dark / almost black.
Some clarity about what is going on would be helpful.
This is a colorspace and display issue.
Fragment shaders are assumed to be writing values in a linear RGB colorspace. As such, if you are rendering to an image that has a linear RGB colorspace (UNORM), the values your FS produces are interpreted directly. When you render to an image which has an sRGB colorspace, you are writing values from one space (linear) into another space (sRGB). As such, these values are automatically converted into the sRGB colorspace. It's no different from transforming a position from model space to world space or whatever.
What is different is the fact that you've been looking at your scene incorrectly. See, odds are very good that your swapchain's VkSurfaceFormat::colorSpace value is VK_COLOR_SPACE_SRGB_NONLINEAR_KHR.
VkSurfaceFormat::colorspace tells you how the display engine will interpret the pixel data in swapchain images you present. This is not a setting you provide; this is the display engine telling you something about how it is going to interpret the values you send it.
I say "odds are very good" that it is sRGB because, outside of extensions, this is the only possible value. You are rendering to an sRGB display device whether you like it or not.
So if you write to a UNORM image, the display device will read the actual bits of data and interpret them as if they are in the sRGB colorspace. This operation only makes sense if the data your fragment shader wrote is itself in the sRGB colorspace.
However, that's generally not how FS's generate data. The lighting computations you perform only make sense on color values in a linear RGB colorspace. So unless you wrote your FS to deliberately do sRGB conversion after computing the final color value, odds are good that all of your results have been in a linear RGB colorspace. And that's what you've been writing to your framebuffer.
And then the display engine mangles it.
By using an sRGB image as your destination, you force a colorspace conversion from linear RGB to sRGB, which will then be interpreted by the display engine as sRGB values. This means that your lighting equations are finally producing the correct results.
Failure to do gamma-correct rendering properly (including the source texture images, which are almost certainly also in the sRGB colorspace, as this is the default colorspace for most image editors; the exceptions would be things like gloss maps, normal maps, or other images that aren't storing "colors") leads to cases where linear light attenuation appears more correct than quadratic attenuation, even though quadratic is how reality works.
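To make the conversion concrete, here is a minimal Python sketch (purely illustrative, not Vulkan code) of the standard sRGB transfer functions that the hardware applies when writing to or sampling *_SRGB images. A mid-range linear value such as 0.2 is stored as roughly 0.484, which is why a frame written as linear data and then re-encoded by an sRGB swapchain looks noticeably brighter than the same data stored raw in a UNORM image:

def linear_to_srgb(c):
    # Standard sRGB encode (what happens when writing to an *_SRGB image).
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * (c ** (1.0 / 2.4)) - 0.055

def srgb_to_linear(c):
    # Standard sRGB decode (what happens when sampling an *_SRGB texture).
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

print(linear_to_srgb(0.2))    # ~0.484: the byte stored in the sRGB image
print(srgb_to_linear(0.484))  # ~0.2: round-trips back to the linear value

This also matches the Update in the question: decoding SRGB textures on sampling and re-encoding on write round-trips back to the same values, so UNORM-everywhere and SRGB-everywhere look similar, while mixing the two shifts the brightness in one direction or the other.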
This is gamma correction.
Using a swapchain with VK_FORMAT_B8G8R8A8_SRGB leverages the ability to apply gamma correction as the final step in your render pipeline. This happens for you automatically behind the scenes.
That is the only place you want gamma correction to happen. Make sure your shaders are not applying gamma correction. You might see it as:
color = pow(color, vec3(1.0/2.2));
If your swapchain does the gamma correction, you do not need to do it in your shaders.
Most images are SRGB (pictures, color textures, etc). Linear images are for specific data, like a blue noise texture or heightmap.
Load SRGB images w/ VK_FORMAT_R8G8B8A8_SRGB
Load LINEAR images w/ VK_FORMAT_R8G8B8A8_UNORM
No shader conversion is required if the rules outlined above are followed.

What is the highest quality export for Spotfire visualizations?

I have a question regarding high quality exports from spotfire to PDF.
I read on the Spotfire support page that to obtain the highest quality exports for visualizations you should select vector (instead of raster) graphics. They still provide better quality than raster graphics at maximum quality (5 out of 5). However, when I export these images to PDF, the quality is relatively low. Is there a way I can increase the quality? Would it help to select e.g. PPT exports? I think manual screenshots are still better in quality, but more time-consuming. We are looking for an end-user-friendly interface.
Furthermore, if your table is longer and you use a slider, what is the recommended way of exporting such a graphic?
Thanks a lot.
raster graphics are generally lower quality. jpeg, for example, is a raster format, where each pixel in the image is coded to a specific color; if you resize the image, it becomes blurry or loses detail.
vector format defines points, lines, and shapes that make up an image. when you scale the image up or down, there is no guesswork trying to blend pixels -- instead the points are recalculated to whatever size. for example, fonts are usually vector format, which is what allows them to be scaled up or down to any size.
the quality of the vector image may be low due to the zoom settings on your PDF viewer (although the image can be scaled to any size, your screen still relies on pixels to display it), but it preserves as much detail as possible. if you zoom in you will probably see an increase in quality.
manual screenshots probably look "best" because there is no scaling or resizing involved. you can export to PNG (raster) image format to get the same effect.
you will not see an increase in quality by exporting to Powerpoint. either you will export it as a raster image to a PPT slide (default settings) or you will export it as a vector using the "As Editable Image" checkbox (which allows you to modify the image in PPT).
what is your end goal? Spotfire is best suited for viewing in the Desktop or Web Player applications; otherwise you lose a lot of features like interactivity and (potentially) live data. if you have to make a lot of exports, maybe it is easier to simply provide a link to the analysis?
to your final question, Spotfire is not very good at table exports. I have had some luck with increasing the page size (A4 or A5, for example) and using Landscape over Portrait. again, I recommend viewing it in the application.

Is there any way I can enlarge a stimulus in #psychopy without losing image quality?

I'm importing my stimuli from a folder. I would like to make them bigger (the actual image size is 120 px high x 170 px wide). I've tried to double the size by using this code in the PsychoPy Coder:
stimuli.append(visual.ImageStim(win=win, name='image', units='cm', size=[9, 6.3]))
(I used double the numbers in cm) but this distorts the image. Is there any way to enlarge it without distorting it, or do I have to change the stimuli themselves?
Thank you
Just to answer what Michael said in the comment: no, if you scale an image up, the only way of guessing what is in between pixels is interpolation. This is what psychopy does and what ANY software would do. To make an analogy: take a picture of a distant tree using your digital camera. Then scale the image up using all kinds of software. You won't suddenly be able to see the individual leaves since the software had no such information as input.
If you need higher resolution, put higher-resolution images in your folder. If it's simple shapes, you may use built-in methods such as visual.ShapeStim and its variants: visual.Polygon, visual.Rect and visual.Circle. PsychoPy can scale these shapes freely so they always stay sharp.
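If you do want to scale a bitmap up while keeping its proportions, one option is to compute the new size from the image's own pixel dimensions, so the aspect ratio is preserved by construction. A minimal sketch (hypothetical file path and window, using units='pix' so no cm conversion is involved; it won't add detail, per the interpolation point above, but it won't distort either):

from psychopy import visual

win = visual.Window(size=(800, 600), units='pix')

orig_w, orig_h = 170, 120          # original image size in pixels (width x height)
scale = 2.0                        # enlarge both axes by the same factor

stim = visual.ImageStim(
    win=win,
    image='stimuli/face01.png',    # hypothetical file
    units='pix',
    size=[orig_w * scale, orig_h * scale],  # equal scaling -> no distortion
)
stim.draw()
win.flip()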

Re-sizing visual image while maintaining image dimensions

I'm working with documents, so maintaining the original image dimensions and the resulting dpi is important.
The aspect ratio is always maintained so the automatic fill modes and alike don't seem to have any effect.
Say I have a 300 dpi document and the user wants to clear an inch border around the image. So I need an inch cropped from the image, but the result needs to be the original image dimensions (2550x3300).
I have been able to achieve this effect with...
...&crop=300,300,-300,-300&margin=300,300,300,300
This works, but seems more than a little clunky. I've tried a lot of other combinations but they all seem to enlarge or reduce the image size which is undesirable in my case.
So does someone know a simpler syntax to achieve the desired result, or do I need to re-size the image then calculate and fill with a margin as I'm doing now?
Thanks
It turns out that my example requests the image at its full size, which turns out to be a special case. When I introduce a width or height into the command line, things don't work very well, since the crop size is relative to the original image dimensions while the margin size is relative to the result image.
Thinking about it more, I abandoned the crop approach. What I really needed was a way to introduce a clipping region into the result bitmap, so I built an extension to do just that. It works well as it doesn't interfere with any of Resizer's layout calculations, and the size of the returned image is whatever the height or width were specified as, which is just what I needed. The Faces plugin has an example of introducing a clipping region.
Karlton
Cropping and re-adding 300px on each edge is best accomplished exactly the way you're doing it:
&crop=300,300,-300,-300&margin=300
What kind of improved syntax would you expect? This isn't a common operation.
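Outside of ImageResizer, the same "crop an inch and pad it back" geometry is easy to express directly; here is a rough Pillow sketch (hypothetical file names, assuming a 300 dpi scan where one inch = 300 px), just to illustrate the operation rather than the Resizer syntax:

from PIL import Image, ImageOps

BORDER_PX = 300  # one inch at 300 dpi

img = Image.open("scan.png")                     # e.g. 2550 x 3300
w, h = img.size

# Crop BORDER_PX off every edge...
cropped = img.crop((BORDER_PX, BORDER_PX, w - BORDER_PX, h - BORDER_PX))

# ...then pad the same amount back with white, so the output keeps the
# original pixel dimensions (and therefore the original dpi).
result = ImageOps.expand(cropped, border=BORDER_PX, fill="white")
assert result.size == (w, h)
result.save("scan_cleared.png")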

resolution from a PDFPage?

I have a PDF document that is created by creating NSImages with their size in 72 dpi points; each has a single representation which is measured in pixels. I then put these images into PDFPages with initWithImage, and then save the document.
When I open the document, I need the resolution of the original image. However, all of the rectangles that PDFPage gives me are measured in points, not pixels.
I know that the information is in there, and I suppose I can try to parse the PDF data myself, by going through the voyeur.app example... but that's a WHOLE lot of effort to do something that should be pretty normal...
Is there an easier way to do this?
Added:
I've tried two techniques:
1. Get the PDFRepresentation data from the page, and use it to make a new NSImage via initWithData. This works; however, the image has both its size and pixel size in 72 dpi.
2. Draw the PDFPage into a new off-screen context, and then get a CGImage from that. The problem is that when I'm making the context, it appears that I need to know the size in pixels already, which defeats part of the purpose...
There are a few things you need to understand about PDF:
The PDF Coordinate system is in points (1/72 inch) by default.
The PDF Coordinate system is devoid of resolution. (this is a white lie - the resolution is effectively the limits of 32 bit floating point numbers).
Images in PDF do not inherently have any resolution attached to them (this is a white lie - images compressed with JPEG2000 still have resolution in their embedded metadata).
An Image in PDF is represented by an object that contains a series of samples that are stored using some compression filter.
Image objects can be rendered on a page multiple times at any size.
Since resolution is defined as the number of pixels (or samples) per unit distance, resolution only means something for a particular rendering of an image on a page. So if you are rendering a particular image to fill the page, then the resolution in dpi is
xdpi = image_width / (pageWidthInPoints / 72.0);
ydpi = image_height / (pageHeightInPoints / 72.0);
If the image is not being rendered to the full size of the page, a complete solution is very tricky. Adobe prescribes that images should be treated as being 1x1 and that you change the page transformation matrix to determine how to render them. This means that you would need the matrix at the point of rendering the image, and you would need to push the points (0,0), (0,1), (1,0) through the matrix. The Euclidean distance between (0,0)' and (1,0)' will give you the width in points, and the Euclidean distance between (0,0)' and (0,1)' will give you the height in points.
So how do you get that matrix? Well, you need the content stream for the page and you need to write a PDF interpreter that can rip the content stream and keep track of changes to the CTM. When you reach your image, you extract the CTM for it.
That last step should take about an hour with a decent PDF toolkit, provided you are familiar with the toolkit. Writing that toolkit is several person-years of work.
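As a sketch of that final computation only (plain Python, assuming you have already extracted the CTM in effect when the image is drawn, as a tuple (a, b, c, d, e, f) in the usual PDF matrix ordering, plus the image's pixel dimensions):

import math

def rendered_dpi(ctm, image_px_w, image_px_h):
    a, b, c, d, e, f = ctm

    # Push the unit-square corners (0,0), (1,0), (0,1) through the matrix.
    def transform(x, y):
        return (a * x + c * y + e, b * x + d * y + f)

    ox, oy = transform(0, 0)
    x1, y1 = transform(1, 0)
    x2, y2 = transform(0, 1)

    # Euclidean distances give the rendered width/height in points.
    width_pts = math.hypot(x1 - ox, y1 - oy)
    height_pts = math.hypot(x2 - ox, y2 - oy)

    # dpi = pixels per inch, with 72 points per inch.
    xdpi = image_px_w / (width_pts / 72.0)
    ydpi = image_px_h / (height_pts / 72.0)
    return xdpi, ydpi

# Example: a 1500x1500 px image scaled to fill a 360x360 pt area -> 300 dpi.
print(rendered_dpi((360, 0, 0, 360, 0, 0), 1500, 1500))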