NSColor, 10.6 and Gamma 2.2 - objective-c

With Snow Leopard the default gamma changed from 1.8 to 2.2. I happen to be working on a few Mac apps that use a very dark custom colour scheme provided by Cocoa. On 10.5 it looks fine but on 10.6 with the new gamma it's much darker and really hard on the eyes.
The colour scheme is defined using numerous [NSColor colorWithCalibratedRed:green:blue:alpha:] objects within a theme class.
Is there any way to 'convert' an NSColor object so that it displays on 10.6 exactly as it would on 10.5?
I know this can be achieved globally from within System Preferences but that's not what I'm after.

The best thing to do is store the color profile for the display on which the color looked good.
Then, use the color profile for the display currently in use to convert the color.
Basically, what you will have is:
(Original Color with Original Profile) converted to (New Color with current color profile).
You will always have three of the four items - you just need to compute the New Color.
For more information, I would suggest reading:
http://developer.apple.com/mac/library/documentation/cocoa/conceptual/DrawColor/DrawColor.html
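Assuming the only difference that matters here is the default gamma change (1.8 on 10.5 vs 2.2 on 10.6), the conversion boils down to decoding the component with the old gamma and re-encoding it with the new one. A minimal sketch of that idea in Python (a real profile conversion via NSColorSpace/ColorSync accounts for far more than a pure power curve):

```python
def convert_component(c, src_gamma=1.8, dst_gamma=2.2):
    """Re-encode a 0.0-1.0 colour component authored for a src_gamma
    display so a dst_gamma display reproduces the same light level."""
    linear = c ** src_gamma              # decode to linear light
    return linear ** (1.0 / dst_gamma)   # re-encode for the new display

# A mid-grey authored on a gamma-1.8 display must be lightened
# to look the same on a gamma-2.2 display:
lightened = convert_component(0.5)       # ~0.567, lighter than 0.5
```

The same exponent arithmetic is what a matching pair of display profiles would do for a grey-axis value; saturated colours additionally go through the profiles' colour matrices.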

The only real problems I have are with dark coloured gradients. 10.4 is still a valid target so I have been using my own gradient wrapper class based around CGShading objects for some time (NSGradient is 10.5+ only).
A simple non-linear correction curve based on the formula below can help:
colour_component = pow(colour_component, 1.0/1.19);
The 1.19 value can be adjusted to create different correction curves.
If 10.6 is detected at run time (using Gestalt) the curve is applied to each of the red, green and blue components for both start and end colours before the gradient is calculated. I have left the alpha values untouched.
I also added a handy user preference to turn this on and off.
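The correction described above, sketched as plain functions (the 1.19 exponent is this answer's empirical tuning value, not anything official from Apple):

```python
def correct(component, exponent=1.19):
    """Lighten a 0.0-1.0 colour component so a dark scheme reads on a
    gamma-2.2 display roughly as it did on 10.5."""
    return component ** (1.0 / exponent)

def correct_rgb(r, g, b):
    """Apply the curve to red, green and blue; alpha is left untouched."""
    return tuple(correct(c) for c in (r, g, b))
```

Dark components are lifted the most, which is exactly where the 2.2 gamma hurts a dark colour scheme; pure black and pure white pass through unchanged.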

Related

How does Mac OS X Finder determine header color?

I am designing a table-like control using Cocoa/Objective-C. Since I want to emulate the look of Finder on Mac, I retrieve system colors when drawing my table.
In particular, I use [NSColor controlAlternatingRowBackgroundColors] for the table rows, which looks reasonably good in my opinion. However, when it comes to the table headers, I have some issues:
[NSColor headerTextColor] yields a black color (which is fine for me, since Finder's header captions are black, as well), but I expected [NSColor headerColor] to yield a color against which the text can be read well. However, [NSColor headerColor] yields a pretty dark gray - obviously differing from Finder's header color.
So, is it normal for headerColor to be that dark? How does Finder determine its header colors (is this even documented)?
Table view headers in Finder haven't been a solid color since before Mac OS X 10.0. First they were "lickable" Aqua blue (or white, unselected) gradients, then they got a bit flatter over time, and since Yosemite (10.10) they've used the blurry translucent effect that's otherwise accessible to developers as NSVisualEffectView.
In any case, headerColor and several other NSColor system colors (like gridColor) don't reflect current UI designs. It's probably worth filing a bug with Apple asking them to deprecate the methods, document exactly which colors are still used for what, or both.
If you want headers that match Finder, and you're not using NSTableView (which gets them for free), NSVisualEffectView is probably what you want.

Images appear "washed out" based on background color, using EMGU Camera library

I'm using the EMGU CV .NET library. I noticed that when I take pictures of anything with color, the colors usually get "washed out" if the background is dark(ish). General rule of thumb I've found is that, the darker the background is, the more washed out the colors are.
Here is how I'm retrieving the image from the camera with EMGU.
Dim imgFeed As Bitmap = mCamera.RetrieveBgrFrame.ToBitmap
In the images below (cropped out some of the background on both), the left image is on dry white cement and the right image is on wet white cement. You can see the "washed out" color especially on the first tag, which is bright orange duct tape.
Here is another image, taken on black pavement in the sun, which in reality is much darker than the white cement, but appears similar in color to the background in the wet cement image above.
Is there some sort of auto-balancing that's occurring in the EMGU library? If so, can I stop this from happening? I need to see the colors more clearly than the background. I've read about _EqualizeHist() and I implemented it, but that did not help me see the colors any more clearly; adding contrast to the image didn't really help because the colors were already close to white.
Update
After reading Spark's answer, I found the SetCaptureProperty() method. I see that you can disable the auto exposure property by setting the value to 0 as shown below.
mCamera.SetCaptureProperty(CvEnum.CAP_PROP.CV_CAP_PROP_AUTO_EXPOSURE, 0.0)
Sadly though, with the particular camera I'm using, it looks like the driver does not support changing this property.
This has nothing to do with the EMGU library; it is the behaviour of the auto-exposure control (AEC) algorithm running inside the camera chip. Try disabling the camera's auto exposure control and reducing the manual exposure level.
A little theory: most AEC algorithms work on a full-frame weighted basis. In the washed-out sample you showed, the black background occupies most of the image, which makes the AEC algorithm assume the image is too dark and raise the exposure level internally.
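A toy illustration of that full-frame behaviour (the 0.45 target and the scene values are made-up numbers; a real AEC loop is far more sophisticated):

```python
def aec_gain(pixels, target_mean=0.45):
    """Naive full-frame auto-exposure: scale exposure so the mean
    brightness of the whole frame hits an arbitrary target."""
    mean = sum(pixels) / len(pixels)
    return target_mean / mean

# A bright orange tag (~0.9) covering 10% of a mostly black scene:
scene = [0.05] * 90 + [0.9] * 10
gain = aec_gain(scene)                 # the dark background inflates this
washed_tag = min(1.0, 0.9 * gain)      # the tag clips toward white
```

The mostly-dark frame pulls the mean brightness way down, so the computed gain is large and anything already bright (the orange tape) saturates, which is the "washed out" effect in the photos.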

Is it possible to control the amount of vibrancy of the translucent/blurry background in Yosemite?

I would like to know whether it is possible to control the amount of translucency in the so-called vibrancy effect introduced recently by Yosemite, which can be implemented in an Objective-C app by employing the NSVisualEffectView class.
Here is an example to be more specific. Consider the translucent effect shown by OS X Yosemite when the volume level is changed:
The vibrancy is much stronger than what one obtains by using a simple NSVisualEffectView (shown in the following image)
If we compare the two images (please ignore the different speaker icons and focus on the background), the amount of vibrancy (the strength of the Gaussian blur) is much stronger in the OS X volume window than in my app's NSVisualEffectView. How can one obtain that?
In OS X Yosemite Apple introduced new materials that can be applied to NSVisualEffectView.
From the AppKit Release Notes for OS X v10.11:
NSVisualEffectView has additional materials available, and they are now organized in two types of categories. First, there are abstract system defined materials defined by how they should be used: NSVisualEffectMaterialAppearanceBased, NSVisualEffectMaterialTitlebar, NSVisualEffectMaterialMenu (new in 10.11), NSVisualEffectMaterialPopover (new in 10.11), and NSVisualEffectMaterialSidebar (new in 10.11). Use these materials when you are attempting to create a design that mimics these standard UI pieces. Next, there are specific palette materials that can be used more directly to create a specific design or look. These are: NSVisualEffectMaterialLight, NSVisualEffectMaterialDark, NSVisualEffectMaterialMediumLight (new to 10.11), and NSVisualEffectMaterialUltraDark (new to 10.11). These colors may vary slightly depending on the blendingMode set on the NSVisualEffectView; in some cases, they may be the same as another material.
Even though this only applies to OS X El Capitan, you can now create a more "close to original" blur effect for your view. I assume Apple uses the NSVisualEffectMaterialMediumLight material for its volume view.
I achieve this effect as follows:
have an NSVisualEffectView to get vibrancy;
have a custom view of the same size on top of the visual effect view;
set the background color of the custom view to white with an alpha of 0 (completely transparent);
increase the alpha of the custom view to make it less translucent (less blurry).

Programmatically, how does hue blending work in photoshop?

In Photoshop you can set a layer's blending mode to be "Hue". If that layer is, for example, filled with blue then it seems to take the layer below and makes it all blue wherever a non-whiteish color exists.
I'm wondering what it's actually doing though. If I have a background layer with a pixel aarrggbb and the layer on top of that is set to blend mode "Hue" and there's a pixel aarrggbb on that layer, how are those two values combined to give the result that we see?
It doesn't just drop the rrggbb from the layer below. If it did that it'd color white and black as well. It also wouldn't allow color variations through.
If a background pixel is 0xff00ff00 and the corresponding hue layer pixel is 0xff0000ff then I'm assuming the end result will just be 0xff0000ff because the ff blue replaces the ff green. But, if the background pixel is 0x55112233 and the hue layer pixel is 0xff0000ff, how does it come up with the shade of blue that it comes up with?
The reason I ask is that I'd like to take various images and change the hue of the image programmatically in my app. Rather than storing 8 different versions of the same image with different colors, I'd like to store one image and color it as needed.
I've been researching a way to replicate that blending mode in javascript/canvas but I've only come up with the "colorize" filter/blend mode. (Examples below)
Colorize algorithm:
convert the colors from RGB to HSL;
change the Hue value to the wanted one (in my case 172° or 0.477);
convert the updated HSL back to RGB.
Note: this is ok on the desktop but it's noticeably slow on a smartphone, I found.
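The three steps above can be sketched with Python's standard colorsys module (note that colorsys uses H, L, S component order, and 172° maps to 172/360 ≈ 0.478):

```python
import colorsys

def colorize(r, g, b, hue=172 / 360.0):
    """Colorize: keep each pixel's lightness and saturation, force the hue."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # colorsys order is H, L, S
    return colorsys.hls_to_rgb(hue, l, s)
```

Greys pass through unchanged because their saturation is zero, which is why colorize preserves white and black in the image.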
You can see the difference by comparing these three images. Original:
colorize:
Fireworks' "blend hue" algorithm (which I think is the same as Photoshop's):
The colorize filter might be a good substitute.
RGB/HSL conversion question
Hue/Chroma and HSL on Wikipedia
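For the "blend hue" mode itself, a common HSL-based approximation is to take the hue from the blend layer and keep the base layer's saturation and lightness. (Photoshop actually defines Hue mode with its own saturation and luminosity functions, as in the PDF blend-mode specification, so this is only an approximation.) A sketch:

```python
import colorsys

def hue_blend(base, blend):
    """Approximate 'Hue' blend mode: hue from the blend layer,
    saturation and lightness from the base layer (HSL approximation)."""
    _, base_l, base_s = colorsys.rgb_to_hls(*base)
    blend_h, _, _ = colorsys.rgb_to_hls(*blend)
    return colorsys.hls_to_rgb(blend_h, base_l, base_s)
```

This matches the question's intuition: a pure green base under a pure blue blend layer comes out pure blue, while white and black are untouched because their saturation is zero.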
I found an algorithm to convert RGB to HSV here:
http://www.cs.rit.edu/~ncs/color/t_convert.html
Of course, at the bottom of that page it mentions that the Java Color object already has methods for converting between RGB and HSV, so I just used that.

changing color of monitor

I would like to program a little app that will change the colors of the screen. I'm not talking about the darkness: I want it to mimic what the screen would look like if, for example, you put on blue or red lenses. I would like to input a color and have the screen look as though I had put on lenses of that particular color. I actually need the program to semi-permanently change the user's experience on the computer: the screen should stay tinted that color for the entire session the computer is turned on.
Transparent, Click Through forms might help you out. It makes a nice see through form that lets mouse clicks pass through it. The solution is in VS2003 format, but it upsizes to 2008 nicely. You could take that sample, rip the sliders off, get rid of the borders and make it fullscreen + topmost. I don't know if it'll accurately simulate a lens though, someone more into optics can tell me if I'm wrong :-)
If the lenses you are trying to simulate are red, green or blue, simply zeroing the other two colour components of each pixel should work. A coloured filter lens works by passing only a certain wavelength of light, and absorbing the others. Zeroing the non-desired components of the colour should simulate this accurately, I believe.
To simulate cyan, magenta, or yellow lenses, zeroing the one other colour component (e.g. the red component in the case of cyan tinted glasses) should work.
I'm not sure how to generalise beyond these simple cases. I suspect converting to say HSV and filtering based on the hue might work.
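An ideal filter in the simple cases above can be sketched per pixel like this (pure RGB masking, ignoring real spectral absorption):

```python
# Which RGB components each idealized lens passes (1) or absorbs (0).
LENS_MASKS = {
    'red':     (1, 0, 0), 'green':   (0, 1, 0), 'blue':   (0, 0, 1),
    'cyan':    (0, 1, 1), 'magenta': (1, 0, 1), 'yellow': (1, 1, 0),
}

def apply_lens(r, g, b, lens='red'):
    """Simulate an ideal coloured filter by zeroing the absorbed components."""
    return tuple(c * k for c, k in zip((r, g, b), LENS_MASKS[lens]))
```

For arbitrary tint colors, the mask values would become fractions between 0 and 1 rather than strict zeroing, which is where the HSV-based generalisation mentioned above comes in.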
To change this for the entire system and use it in interactions with ordinary programs, you could change the colour profile for the display. For paletted/indexed-colour displays, this could be done by changing the colour look-up table (CLUT) for the display adapter. PowerStrip is a handy utility with versatile colour controls that should be able to achieve this quickly and easily on modern display adapters (e.g. by adjusting the red, green and blue response curves independently).
I came across Color Oracle and thought it might help. Here is the short description:
Color Oracle is a colorblindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
Take a snapshot of the screen, convert each pixel into its grayscale value, then change the pixel value to a percentage of red. This will preserve the contrast throughout the image while also presenting a red tone.
To convert to grayscale in C#:
https://web.archive.org/web/20141230145627/http://bobpowell.net/grayscale.aspx
Then, to convert to a shade of red, zero out the values in the green and blue for each pixel.
(You can probably do the above in one shot, but this should get you started.)
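The two steps combined into one per-pixel function (using the common ITU-R BT.601 luma weights; the linked article uses a similar formula):

```python
def red_tint(r, g, b):
    """Grayscale via BT.601 luma weights, then keep only the red channel,
    preserving contrast while presenting a red tone."""
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return (gray, 0.0, 0.0)
```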