Using EDSDK, I want to programmatically set the white balance (RGGB) values of the LiveView stream, and also the white balance of the JPG (and RAW) images coming directly from the camera. The process of manually white balancing LiveView and off-camera images is not entirely clear to me, nor is it well documented in the EDSDK manual.
Through trial and error, I worked my way through calibrating LiveView by issuing the kEdsCameraCommand_DoClickWBEvf command with coordinates pointing at a grey card. This does indeed seem to affect LiveView:
Liveview switches to "ClickWB" (-1) white balance setting
Camera settings remain unchanged: it doesn't change the as-shot values of the camera.
Note that the "manual WB" icon on the camera disappears when switching to "ClickWB", which suggests something is wrong.
Apparently, Canon's EOS Utility does things slightly differently. By tracing and polling PTP events I can see that:
Clicking white balance sends a similar ClickWB command to the camera.
Clicking "Apply to shot images" sends a command to the camera.
The camera White Balance stays on value 6 ("Manual","White Point" or "White Paper" depending on the context).
Liveview is also affected as it switches to 6.
The trace shows evidence of a "CPtpCamera::TranslateMWb" command, as if there is a command to set the user white balance.
The 'raw' white balance coefficients can apparently be retrieved, since EOS Utility displays a warning when the coefficients are not OK.
For RAW images, I worked around white balancing by storing the white balance coefficients from a RAW of a grey card and re-applying these coefficients when converting a new image (without grey card) to TIFF. This does not affect the on-camera JPGs or the as-shot white balance, and it cannot be recovered after a reset.
I am stuck on disconnecting/reconnecting the camera and (programmatically) applying the previously calibrated or stored WB values. Is this possible, and if so, how do I copy the original white balance values? Is anyone here with experience in manual white balancing through the EDSDK willing to share their approach?
Note:
Canon provides no official technical support whatsoever for the EDSDK
Older SDKs were reported to include relevant properties (e.g. kEdsPropID_UserWhiteBalanceData in version 2.5). There must be a replacement for this?
--- update Dec 17 2014 ---
I am (indirectly) in "official" contact with Canon's EDSDK developers, and currently there is no official way of setting the in-camera custom white balance through the EDSDK.
I know the output image size requirement is 1440 pixels by 810 pixels at 72 dpi.
The problem I've run into is that lots of software and services lock the 16:9 ratio but do not output a standard size.
I want to be able to zoom the cropping box in and have the cropped image always come out at 1440x810 at 72 dpi.
A way to move the cropping box around the original image would also be nice. I just need a working solution that is free, at least temporarily.
The script would require uploading a high-quality image; a 16:9 cropping box would appear over the image to crop out the features I want, and hitting crop would default the box to 1440x810, with a restriction on the cropping box so it stops when the maximum conversion threshold is reached and the produced output image does not get pixelated.
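To make that constraint concrete, here is a rough browser-side sketch in TypeScript (not a hosted service, and all names are mine, not from any particular tool) of "lock 16:9, output exactly 1440x810, never upscale". It assumes the source image is at least 1440x810; the 72 dpi part is only metadata, which a canvas export does not set.

const OUT_W = 1440;
const OUT_H = 810; // fixed 16:9 output size

// Crop a 16:9 region of the source image and resample it to exactly 1440x810.
// The crop width is clamped so it is never smaller than 1440 source pixels,
// meaning the result is only ever downscaled (or copied 1:1), never upscaled.
function cropTo1440x810(img: HTMLImageElement, x: number, y: number, boxWidth: number): HTMLCanvasElement {
  const w = Math.min(Math.max(boxWidth, OUT_W), img.naturalWidth);
  const h = w * (OUT_H / OUT_W);
  const sx = Math.min(Math.max(x, 0), img.naturalWidth - w);
  const sy = Math.min(Math.max(y, 0), img.naturalHeight - h);

  const canvas = document.createElement('canvas');
  canvas.width = OUT_W;
  canvas.height = OUT_H;
  canvas.getContext('2d')!.drawImage(img, sx, sy, w, h, 0, 0, OUT_W, OUT_H);
  return canvas; // canvas.toDataURL('image/jpeg') then gives the 1440x810 result
}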
Appreciate all the help I can get. Have a wonderful day.
I am working from my Android phone and a public computer, hence the need for a free online service. Normally I would use my Photoshop CS6, but that is not currently available.
I will continue searching, but it's like finding a needle in a haystack.
I am hoping an expert here already knows a solution.
I have created a pdf viewer using react-pdf. When I display certain pdfs, the text is choppy and unreadable. I have tried zooming in and out of the document and it is choppy in different ways at different scales. Sometimes the text even looks okay at a certain scale after zooming out and then zooming back in.
(Sample at 1.5 scale)
(Sample at 1.6 scale)
At first, I thought it might be an issue with react-pdf, but I saw that react-pdf is basically a wrapper around PDF.js. I found that I can replicate the issue in the PDF.js demo page.
Unfortunately, I'm working with a pdf that contains identifying information, so I can't share the full pdf or full screenshot. I'll include as much as I can figure out to share.
What I have tried
My initial thought was that maybe the component was rendering small initially and then had issues scaling up. So I made the initial size really large, but that didn't fix it.
I made sure that standard fonts were included, following the instructions on the react-pdf home page.
I tried using pdf repair tools online to maybe fix the pdf itself. That didn't help.
I tried changing the renderMode to 'svg' as detailed in the Document api documentation. This was the most helpful fix, as it does render the text correctly, but it then makes it so the images on the pdf don't load.
Thanks for your help/suggestions.
If I can find a way to edit the pdf to not have sensitive information, I'll try to find a place to make it available for testing. I apologize that I cannot provide that at this time. I know it's difficult to give advice when you can't replicate it yourself. I'll work on that.
From a programming point of view, the only fix is "Providing a standardFontDataUrl and disabling the font face" (see later). However, it affects the output of many developers building on pdf.js, so I consider it still on topic.
This issue is still open in react-pdf, though I have seen it mentioned by other pdf.js users since mid-year (an MS or Chrome update?), so I am unsure whether it is a wider failure affecting users of Mozilla's PDF.js code.
https://github.com/wojtekmaj/react-pdf/issues/1010
https://github.com/wojtekmaj/react-pdf/issues/1025
There seem to be earlier reports back in early March, and later suggestions to change Windows 10 drivers; however, it is also reported by Windows 11 Pro users. It affects PDF.js versions from 2.8.335 to 2.14.305 and does not affect version 2.7.570, so it is partially down to updated versions. But it is seen only in Chromium.
It is entirely possible that we started doing something that trips Chrome.
The symptoms seem to be hardware- or settings-oriented, since it is reportedly seen by some groups of users but not by others with identical setups.
Toggling back and forth between single-page and multi-page views resolves the issue. It also seems dependent on the resolution, or appears on some machines and not others, so it is a little tricky to repro.
I am not getting it personally, but a guy on my team gets it.
It is unclear which browsers are affected, but it looks like a Chromium/WebKit rendering bug?
Several browsers have been tested and only Chrome shows this.
My colleague gets the same in Edge Version 101.0.1210.47 (Official build) (64-bit) and Brave (1.38.118, Chromium 101.0.4951.67). Will edit the issue.
The suggested workaround is:
Providing a standardFontDataUrl and disabling the font face fixes the issue.
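As a rough sketch of what that workaround looks like in react-pdf (TypeScript/JSX; the pdfjs-dist version and CDN URL below are placeholders I chose, not values from the issue):

import { Document, Page } from 'react-pdf';

// standardFontDataUrl tells pdf.js where to fetch its bundled standard fonts;
// disableFontFace makes pdf.js draw glyphs as paths instead of injecting
// @font-face rules, which sidesteps the accelerated-2D-canvas rendering bug.
const options = {
  standardFontDataUrl: 'https://unpkg.com/pdfjs-dist@2.14.305/standard_fonts/',
  disableFontFace: true,
};

function Viewer({ file }: { file: string }) {
  return (
    <Document file={file} options={options}>
      <Page pageNumber={1} />
    </Document>
  );
}

Defining the options object outside the component also avoids react-pdf reloading the document on every render.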
If we disable Accelerated 2D canvas in chrome://flags then the preview appears fine. But since this flag is on by default, users see the pixelated preview unless we ask them to turn it off.
Figured out that this only happens when hardware acceleration is enabled in your Chrome settings.
When it's turned off, the issue does not happen.
In the address bar, paste chrome://gpu or edge://gpu etc. (it's a long report of the current graphics feature status). In my case (currently unconfirmed via reboot for my Edge) it shows "Accelerated 2D canvas is unavailable: either disabled via blocklist or the command line. Disabled Features: 2d_canvas", thus I cannot see the problem.
To change the setting you can use
chrome://flags/#disable-accelerated-2d-canvas
but it's a manual choice between options.
So on reboot I see
Graphics Feature Status
Canvas: Hardware accelerated
Canvas out-of-process rasterization: Disabled
but I have little problem with the demo (except the normal fuzzy text as pixels), so either an Edge update or my hardware keeps it from being visibly affected, or my default settings are reasonable.
This issue has finally been fixed in the latest version of the react-pdf library. Check here: https://github.com/wojtekmaj/react-pdf/releases/tag/v6.2.2
I also faced the same error, and I fixed it by setting the render mode to canvas (earlier it was SVG) and the scale value to more than 1. Try scale = 1.5.
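For reference, a minimal sketch of that setup (prop names as in react-pdf v5/v6; the values come from the answer above, the surrounding code is my assumption):

import { Document, Page } from 'react-pdf';

// renderMode="canvas" is the default canvas renderer; scale={1.5} renders the
// page at 1.5x resolution, which is what resolved the choppy text here.
function Viewer({ file }: { file: string }) {
  return (
    <Document file={file}>
      <Page pageNumber={1} renderMode="canvas" scale={1.5} />
    </Document>
  );
}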
I splurged and bought one of those high definition 4K screens. More specifically, the Dell UltraSharp 4k UP3216Q 31.5", combined with a new PC running Windows 10.
When the computer occasionally reboots, it goes into a mode where when I load IntelliJ, it shows the following error message:
8:16 PM You may need to manually configure the HiDPI mode to prevent UI scaling issues. See the troubleshooting guide.
The interesting thing is that when it's running in this mode, I actually like the way IntelliJ looks. I like it because it's running in true sharp 4K mode, and at the same time, all the fonts are large enough to be legible, and not require a magnetic resonance microscope or a monocle to make out the letters.
However, other times, when the system boots up, I do not get that error, meaning everything is functioning normally, but in that case all the fonts are so tiny as to be illegible. It literally hurts my eyes to look at it, and the only alternatives I have left at that point are to either drop down from 3840x2160 to 1920x1080, or to go into the settings and start increasing the font sizes, which is annoying. Not to mention that if I drop down into 1920x1080 mode, the quality of what I am looking at degrades and everything starts looking pixelated...
Is there anything that can be done to stabilize the situation on these new 4K screens so that IntelliJ looks normal?
Try this:
Help > Edit Custom VM Options:
-Dsun.java2d.uiScale.enabled=true
More information can be found here:
https://intellij-support.jetbrains.com/hc/en-us/articles/115001260010-Troubleshooting-IDE-scaling-DPI-issues-on-Windows
If that does not help, create a ticket in the JetBrains issue tracker: https://youtrack.jetbrains.com/
They are usually very responsive.
Another possibility is that you have the Windows UI scaling value for the screen set to a non-integral value in display settings. This messed me up: I had the setting at 175%, while the default is 200%. IntelliJ (and many other applications) will not scale properly if it is set to a non-integral scaling value.
As soon as I switched this back to 200%, IntelliJ scaled perfectly.
I fixed this problem by setting the environment variable IDEA_JDK_64 to the JDK path on Windows 10.
I'm trying to get a grasp on the capabilities of the current version of Ghostscript (see also this question that I asked a few days ago). So, I downloaded a "test form" for the PDF/X-4 standard from www.pdfx-ready.ch, a standards organization in Switzerland, and tried to render it... (In case anyone wants to try this, here's the direct download link: http://www.pdfx-ready.ch/files/PDFX-ready-OutputTest_PDFX4-CMYK_V301d.zip. You can find more info on this page (in German): http://www.pdfx-ready.ch/index.php?show=496)
Anyway: I was pleasantly surprised to see that most of the test fields were rendered correctly on screen. Most of the other PDF viewers that I had tried had failed miserably. Then I noticed that there were a few test cases that produced errors:
CMYK Overprint Mode (on page 1) is not respected for fonts and vectors (it works fine for images, masks and shadings).
Rendering of Knockout Transparency Groups (on page 2) is not performed correctly.
Rendering of a few more fields (on page 4) that had to do with overprinting (Spot to CMYK, CMYK over Spot, Image Overprint etc.) failed.
So, I started experimenting... First I noticed that I still had an old version of Ghostscript installed. So, I compiled the new version 9.16 and tried again. This time, the Knockout Transparency Groups (see above) were rendered correctly. Great!
Then I read here that "the handling of overprinting and spot colors depends upon the process color model of the output device". So, instead of -sDEVICE=x11 I now tried -sDEVICE=x11cmyk. And to my surprise, the errors regarding the CMYK Overprint Mode went away. Unfortunately, the errors on page 4 remained.
What's more, I now have two new problems: First of all, the pages are now rendered in wrong colors. In fact, the white background of the test pages now appears in cyan! Also, it seems, Ghostscript is now trying to simulate some kind of ugly halftoning on screen. I read here again that "The differences in appearance of files with overprinting and spot colors caused by the differences in the color model of the output device [...] are not due to a limitation in the implementation of Ghostscript or its output devices." So, I'm assuming that I'm missing something. But what is it?
Summarizing:
Is there a way (maybe another device, a command line parameter or something) to tell Ghostscript to handle overprinting correctly? Or hasn't this been implemented yet?
What causes the cyan tinting of the white background?
Is there a way to print this correctly to an inkjet, the way it appears on screen? (lpr doesn't seem to work well.)
Thanks in advance.
UPDATE
So, I experimented a lot and read a few discussions. Also, the documentation here, which I found pretty interesting, as it says:
"Ghostscript currently provides overprint simulation for spot
colorants when rendering to the separation devices psdcmyk and
tiffsep. These devices maintain all the spot color planes and merge
these together to provide a simulated preview of what would be
printed."
Alright, this is what @KenS (see below) mentioned in a comment. But then:
"It is possible to get a simulated preview of overprinting with other
CMYK devices by specifying -dSimulateOverprint = true/false In this
case, simulated overprinting is achieved through a blending of the
CMYK colorants." [p.9]
Now, I read that as saying that I can use a CMYK device (like tiff32nc) to get a simulated preview of overprinting with spot colors. Am I correct? So, after some more reading here (just in case this has anything to do with CMYK, which I doubt), I finally tried the following:
gs -dBATCH -dNOPAUSE -dSAFER
-dSimulateOverprint=true
-sDefaultCMYKProfile=ISOcoated_v2_300_eci.icc
-sOutputICCProfile=ISOcoated_v2_300_eci.icc
-sDEVICE=tiff32nc
-sOutputFile=out.tif
in.pdf
I even experimented with the options -dOverrideICC, -dRenderIntent and -sProofProfile. Nothing seems to work. What am I misunderstanding here? Is there really no way to render a non-separated full-color preview of correctly overprinted spot colors?
UPDATE 2
So, I finally tried the tiffsep device (not really what I would like to achieve, but interesting as a test case) and checked the five files that are produced. And there are still errors! If you would like to check, run the command
gs -dBATCH -dNOPAUSE -dSAFER
-sDEVICE=tiffsep
-dFirstPage=4
-dLastPage=4
-sOutputFile=page4.tif
PDFX-ready_Output-Test_301d_X4.pdf
over the aforementioned PDF/X-4 document. Then examine, e.g., the third test field in the first row in the left column (page 4).
So, I really don't know what to make of this. Does that mean that Ghostscript can't handle overprinting with spot colors at all - contrary to what the documentation says? Is that a bug? Or do I have the command wrong? Am I missing anything?
The first answer is: stop trying to use the X11 device; it's an RGB device and not hugely well supported. In order to do X11 CMYK, the input must be rendered to CMYK and then post-filtered to RGB. It's not a good solution.
Overprinting is only defined for CMYK process colours (and spots); any other colour model will not perform overprinting. So I would suggest you render to TIFF or JPEG devices using their CMYK variants.
Spot colours are even more complex: if the device does not support the requested spot colour then it uses the tint transform to convert into the defined alternate colour space. If the tint transformation takes place, the spot is not overprinted.
Since the display devices cannot support spot colours, you can't preview spot overprinting using a display device. If you want to do this you should use the tiffsep device.
If you believe you have found a bug in Ghostscript, then please report it as such, but you will have to report it against a CMYK device, and I'll say now that we won't be very active with bugs in the X11 CMYK device; it's practically unused.
Printing to an inkjet device depends on the printing workflow, and I have no idea what you are using for that. If it's CUPS (and I'm guessing solely based on the fact that you are using an X11 device) then this 'should' just work. But it depends on the complete end-to-end print process, and I have no idea what it is you are doing.
Again note that spot colours will not be available on a CMYK printer, so overprinting spots is probably not going to work the way you expect.
I may be very late to the party but this works for me:
gs -dBATCH -dNOPAUSE -dSimulateOverprint=true \
-sDEVICE=jpegcmyk -sOUTPUTFILE=overprint.jpg overprint.pdf
color settings --> working space
A Usability Post article tells us to
go to the "Working Spaces" section and select the sRGB IEC61966-2.1 profile.
Smashing Magazine tells us to
set the working space for RGB to Monitor RGB.
http://viget.com/ also recommends
changing the top drop-down to Monitor Color.
What should we use?
Second part of the question: saving for web.
Should we always uncheck 'Convert to sRGB'? There are also contradictory tutorials on this one.
Thank you very much in advance!
Web images should be saved without any additional data. No color profiles. Many browsers read color profiles however they want.
sRGB IEC61966-2.1 and Monitor RGB are the same profile in color settings.
You don't need to check Convert to sRGB. Try checking it, but you won't see any changes if you are already working in sRGB colors.