I'm using matplotlib to generate some composite figures (from raw data and images). I'm trying to get the script to take image files in a few file formats, which are then plotted via:
import matplotlib.image as mpimg  # plus the usual pyplot setup

Nxy = mpimg.imread(Nxy_filename)
imgplot = ax1.imshow(Nxy)
where ax1 is the subplot I want the image to show up in. This works fine for both PNG and JPEG images, but for a .bmp (of the same image) matplotlib seems to turn it blue: the image comes out with a blue tint in my composite figure, while the png and jpg files look exactly the same as the original. Any idea why this would happen? I'm reluctant to blindly alter the color map in the code since the other image formats appear as expected.
It sounds like your PNG and JPEG images are RGB images that happen to be grey, while the BMP is truly greyscale. Check the shape of Nxy: my guess is it's two-dimensional for the BMP, while the PNG and JPEG arrays have three dimensions. A two-dimensional array gets rendered through matplotlib's default colormap, which is what produces the blue tint; an RGB array is shown as-is.
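A quick way to confirm this, and to work around it, is to check the array's dimensionality and pass an explicit grey colormap for two-dimensional data. A minimal sketch (the filename is hypothetical; ax1 is whatever subplot you are targeting):

import matplotlib.pyplot as plt
import matplotlib.image as mpimg

fig, ax1 = plt.subplots()
Nxy = mpimg.imread('image.bmp')   # hypothetical filename
print(Nxy.shape)                  # (H, W) for greyscale, (H, W, 3) for RGB

if Nxy.ndim == 2:
    # A 2-D array is pushed through the default colormap (viridis),
    # which causes the blue tint; force a grey colormap instead.
    imgplot = ax1.imshow(Nxy, cmap='gray')
else:
    imgplot = ax1.imshow(Nxy)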
I'm trying to overlay two transparent images with matplotlib and save the result, but the result looks different depending on the file type. Specifically, it's much more washed out when saving to svg.
Here's an example. In this case, I could just add the two images before displaying them, but this is just a simple example. In reality what I'm trying to do is more complicated (images of different sizes with different colormaps), so they have to be plotted separately.
Example code:
import numpy as np
import matplotlib.pyplot as plt

f, ax = plt.subplots(figsize=(2, 2))
ax.imshow(np.eye(3), alpha=.5)
ax.imshow(np.eye(3)[::-1], alpha=.5)
f.savefig('example.png')
f.savefig('example.svg')
The png file looks just like it does on the screen, but the svg file looks washed out. I would like to know how to save as svg, without the washed-out effect (i.e. it should look like it does on the screen).
As a bonus question, why does the png plot appear different depending on the order in which I plot the transparent images? The second image always looks stronger. Interestingly, in the svg, both are equally washed out.
Example saved as png:
Example saved as svg:
matplotlib version: 3.1.3
python version: 3.7.7
Thanks for any tips!
I'll post what I think is going on, but if someone can answer with more legit information I'll accept it.
I think that every time you call imshow with an alpha value, it blends the current image in the axis with the new image, using (new * alpha + current * (1-alpha)). The problem with this is that if you display 10 images each with alpha 0.5, then the first image is attenuated to nothing by the iterative blending, whereas the last image gets to be 50% of the final result. Nonetheless this is apparently the method used for rendering to the screen and saving to png.
In contrast, when saving to svg, it saves each image as a separate overlay with its own alpha. The svg container or renderer then uses some more intelligent method that considers all overlaid images at once. However, in my particular case, this leads to a more washed-out look because all the images are partially transparent.
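To make the asymmetry concrete, here's a rough numerical sketch of the blending described above, treating the two arrays as if they were already colour values (the real renderer also blends against the axes background and applies a colormap, so this is only illustrative):

import numpy as np

a, b = np.eye(3), np.eye(3)[::-1]
alpha = 0.5

# Iterative blending, as described above: each new layer is composited
# onto whatever is already there using new*alpha + current*(1-alpha).
composite = np.zeros_like(a)
for layer in (a, b):
    composite = alpha * layer + (1 - alpha) * composite
# After two layers: composite == 0.5*b + 0.25*a, so the last image dominates.

# Pre-blending the arrays yourself gives a symmetric result that looks the
# same in every output format (the workaround the question already mentions):
pre_blended = 0.5 * a + 0.5 * b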
Can anyone tell why the image in this pdf does not display as 100% Cyan?
clrtestc - NOPREBLEND32.PDF
Warning: I probably know just enough about pdf and colour to be dangerous!
I'm pretty sure each colour plane of the image is in a separate image. Here's a blended version if that helps.
I know the ColorSpace is DeviceCMYK
I'm pretty sure there is only 100% Cyan in the image, at least there was when it went into the PDF converter.
What went in:
CMYK: 100,0,0,0
RGB: 0,255,255
What I measure coming out:
CMYK: 100,27,0,6
RGB: 0,173,238
I'm foxed! Is there some filter affecting the rendering of the PDF?
There's also Magenta, Yellow and Black versions if they help.
Any help much appreciated.
The PDF file is extraordinarily complicated: it has numerous Forms, some of them nested, most of which are empty. However, there appears to be only one image, which is defined in an Indexed CMYK space. So as far as I can see, this is indeed a 100% cyan image.
The extended graphics state does use the Multiply Blend mode, and there is no group and no page group specified, so the colour space used for the blending will depend on the colour model of the output device. If that's a monitor, then it's entirely possible that the resulting output will be RGB.
That's because your CMYK image needs to be converted to RGB in order to be blended using that colour space.
Incidentally, the image is in an Indexed colour space. In your image all the image samples have the same value, that value is then consulted in a lookup table, and that table returns the CMYK components. So no, there is not one image per colour plane, or at least, not in this file.
To be honest, you're going to have to explain better how you are evaluating the content of the PDF file. As far as I can see the image is 100% cyan, and when rendered to a CMYK device, it will remain 100% cyan. If you render to an RGB device, it will be converted to RGB. A poor quality PDF consumer might decide to convert to RGB in the absence of a defined colour space for the blending operation.
Since the blending mode doesn't actually do anything (there's no defined alpha, SMask or any other transparency in the file) you could remove that and see if it sorts out your problem.
Edit
Your screen will be an RGB device, so no matter what the CMYK values in the PDF file are, there won't be any CMYK in the screenshot. The PDF rendering engine will have to convert the CMYK to RGB.
So the PDF rendering engine performs an opaque CMYK->RGB conversion. Then you take a picture of that RGB screen. You load that into an image editing application, and ask it what the RGB values are and presumably what it thinks are the CMYK equivalents.
If the CMYK->RGB calculation that the PDF viewer performs is not the inverse of the calculation that the RGB->CMYK image application performs, then you won't be getting the right values!
There's no way to predict what the RGB intermediate values 'should' be, because there is no 'right' answer here. Fundamentally this isn't a reliable technique for evaluating the colour.
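As a rough illustration of why the round trip isn't reliable, here are the textbook device-independent conversion formulas (real PDF viewers and image editors use ICC profiles, so their actual numbers will differ):

def cmyk_to_rgb(c, m, y, k):
    # c, m, y, k in 0..1; returns 8-bit RGB
    r = 255 * (1 - c) * (1 - k)
    g = 255 * (1 - m) * (1 - k)
    b = 255 * (1 - y) * (1 - k)
    return round(r), round(g), round(b)

def rgb_to_cmyk(r, g, b):
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    if k == 1:
        return 0, 0, 0, 1
    return ((1 - r - k) / (1 - k),
            (1 - g - k) / (1 - k),
            (1 - b - k) / (1 - k),
            k)

print(cmyk_to_rgb(1, 0, 0, 0))    # (0, 255, 255): pure cyan maps to pure RGB cyan
print(rgb_to_cmyk(0, 173, 238))   # roughly (1.0, 0.27, 0.0, 0.07)

Notice that with these naive formulas the RGB you measured, 0,173,238, already round-trips to roughly 100,27,0,7, which is consistent with the shift happening in the viewer's CMYK-to-RGB conversion rather than inside the PDF itself.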
It's hard to make any kind of recommendation without knowing what you are trying to achieve (and possibly why), and what tools you are prepared to use. I believe Acrobat Pro would allow you to look at the colour values directly for example. Or you could use something like Ghostscript to create a CMYK TIFF file, then open that in an image application which supports CMYK (like Photoshop) and look at the values there.
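If you try the Ghostscript route, a minimal sketch that renders the PDF to a CMYK TIFF (invoked here via Python's subprocess; the resolution is an arbitrary choice):

import subprocess

# Render the PDF to a CMYK TIFF (tiff32nc = 8 bits per C, M, Y, K component),
# then open the TIFF in a CMYK-aware editor to read the actual values.
subprocess.run([
    'gs', '-dBATCH', '-dNOPAUSE',
    '-sDEVICE=tiff32nc', '-r150',
    '-o', 'page.tif',
    'input.pdf',          # the PDF in question
], check=True)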
But rendering to the screen, taking a screenshot and trying to figure out what the CMYK values might or might not have been is not really going to work.
I have scanned my copybook and want to crop out extra white regions with Inkscape.
To achieve this, I import the initial image (PDF) into Inkscape, draw an appropriate rectangle, and use Object->Clip->Set to cut out the region I need. Then I resize the page to the drawing and save the resulting page as a new PDF file through File->Save a Copy.
I expected the size of the new PDF file (with the cropped image) to be smaller than the size of the initial PDF (with the uncropped image), but they are the same.
What is the reason for this, and can it be worked around?
I use Inkscape 0.91 on Linux Mint 18.2.
Thank you in advance.
Because the original image is still there, fully intact and with all its contents. The cropping rectangle is just an instruction to the PDF viewer to crop out those regions when rendering the image.
However, in Inkscape you can bake the crop in: when exporting to PDF, choose "apply raster effects", which should actually alter the contained image(s).
I'm rendering a PDF using the pdf.js library. There I can specify a zoom (scale) property, which is fine: I can define a pretty high zoom, let's say 8x, and still get decent quality in the rendered PDF. However, if I take the same PDF converted to a raster image format like JPEG and then try to render it at high zoom, the quality is very bad. Why is that so?
You are describing the difference between vector graphics and raster graphics. A vector graphic format contains commands telling how to draw an image. A raster format is an array that tells what the color is at each position in the image.
PDF is largely a vector format (yes, you can embed a raster image in a PDF). A PDF that has an instruction to draw a line or draw a character can be zoomed to any degree and the drawing will still be correct.
In a raster format, if you zoom, eventually you see the individual pixels in the array and they cannot be zoomed any more without distortion. Text in a JPEG or PNG file becomes jagged as you zoom.
On the other hand, try to create a photographic quality image just with drawing commands and you would get huge files.
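A quick way to see the difference for yourself, sketched with matplotlib (which can write both vector and raster output):

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(2, 2))
ax.text(0.1, 0.5, 'Hello', fontsize=12)

# Vector output: stored as drawing commands, so zooming in a PDF viewer stays sharp.
fig.savefig('text.pdf')

# Raster output: a fixed grid of pixels; zooming to 8x just enlarges those
# pixels, so the text turns blocky.
fig.savefig('text.png', dpi=72)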
I'm trying to use dcraw on a color image (e.g. CR or NEF) to extract raw monochrome data for image processing.
With parameters -4 -D -c I get an image with a checkerboard as shown below:
When unzoomed, the image data is correct, except for the checkerboard pattern, which appears in all images from different cameras.
The above image was produced using -T and zooming in on the resulting .tiff file in File Viewer Plus. In practice, I'm reading the .pgm file directly and getting the same checkerboard.
What am I not understanding? Does this have something to do with Bayer filtering?
Yes, this is due to the Bayer filter and the absence of demosaicing. For example, in green areas the green pixels of the Bayer pattern will be brighter than the red ones, whereas in red areas the green pixels will be dark.
To get some kind of correct grayscale (or color) image, intensity has to be weighted over a 2x2 area (for a standard Bayer pattern). What you are looking for cannot be achieved without the demosaicing step.
Your best bet is to extract a color image, then turn it into grayscale.
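For illustration, here's a minimal sketch of that 2x2 weighting, binning each Bayer cell of the dcraw -4 -D output into one grey pixel (imageio is assumed as the PGM reader; any 16-bit PGM reader will do, and you lose half the resolution compared to proper demosaicing):

import numpy as np
import imageio.v2 as imageio   # assumed reader for the 16-bit PGM from dcraw

raw = imageio.imread('raw.pgm').astype(np.float32)

# Crop to an even number of rows/columns, then average each 2x2 Bayer cell
# (one R, two G, one B sample) into a single grey output pixel.
h, w = (raw.shape[0] // 2) * 2, (raw.shape[1] // 2) * 2
raw = raw[:h, :w]
gray = (raw[0::2, 0::2] + raw[0::2, 1::2] +
        raw[1::2, 0::2] + raw[1::2, 1::2]) / 4.0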