I'm trying to adjust the screen contrast in Objective-C for a Cocoa application using kIODisplayContrastKey. I saw a post about adjusting screen brightness here:
Programmatically change Mac display brightness
- (void) setBrightnessTo: (float) level
{
    io_iterator_t iterator;
    kern_return_t result = IOServiceGetMatchingServices(kIOMasterPortDefault,
                                                        IOServiceMatching("IODisplayConnect"),
                                                        &iterator);
    // If we were successful
    if (result == kIOReturnSuccess)
    {
        io_object_t service;
        while ((service = IOIteratorNext(iterator)))
        {
            IODisplaySetFloatParameter(service, kNilOptions,
                                       CFSTR(kIODisplayBrightnessKey), level);
            // Let the object go
            IOObjectRelease(service);
            return;
        }
    }
}
Code by Alex King from the link above.
And that code worked. So I tried to do the same for contrast by using a different key (kIODisplayContrastKey), but that doesn't seem to work. Does anybody have an idea if that's possible?
I'm using OS X 10.9.3.
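One thing worth checking before anything else is whether the display's driver exposes a contrast parameter at all; many panels simply don't. A minimal sketch, assuming the same IOKit setup as the brightness code above (`IODisplayGetFloatParameter` is the read counterpart of `IODisplaySetFloatParameter`; the helper name is mine):

```objc
#include <IOKit/graphics/IOGraphicsLib.h>

// Returns YES if the first connected display reports a contrast value.
// Sketch only: iterates the same way as the brightness example above.
static BOOL displaySupportsContrast(void)
{
    io_iterator_t iterator;
    BOOL supported = NO;

    if (IOServiceGetMatchingServices(kIOMasterPortDefault,
                                     IOServiceMatching("IODisplayConnect"),
                                     &iterator) == kIOReturnSuccess)
    {
        io_object_t service;
        while ((service = IOIteratorNext(iterator)))
        {
            float value = 0.0f;
            // A failure here (e.g. kIOReturnUnsupported) means the driver
            // does not implement a contrast control for this display.
            supported = (IODisplayGetFloatParameter(service, kNilOptions,
                                                    CFSTR(kIODisplayContrastKey),
                                                    &value) == kIOReturnSuccess);
            IOObjectRelease(service);
            break;
        }
        IOObjectRelease(iterator);
    }
    return supported;
}
```

If this returns NO, no amount of calling IODisplaySetFloatParameter with kIODisplayContrastKey will have an effect.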
I'm trying to create a video recorder in a NativeScript plugin on the iOS side, which means I am using the native Objective-C classes inside the plugin so that the NativeScript app shares an interface with the Android implementation.
I have the camera view loaded and I am trying to get access to the video frames from the AVCaptureSession. I created an object that implements the delegate protocol to receive the frames; for the first second the captureOutput:didOutputSampleBuffer:fromConnection: method delivers frames, but from then on every frame is dropped and I do not know why. I can tell they are dropped because the other protocol method, captureOutput:didDropSampleBuffer:fromConnection:, runs for every frame.
I tried changing the order of initialization for the AVCaptureSession, but that didn't change anything.
Below is the main code that creates the capture session and the capture object. While this is TypeScript, NativeScript allows you to call native Objective-C functions and classes, so the logic is the same as in Objective-C. I also create a VideoDelegate object in NativeScript, which corresponds to a class in Objective-C and lets me implement the protocol that receives the video frames from the capture device output.
this._captureSession = AVCaptureSession.new();

// Get the camera
this._captureSession.sessionPreset = AVCaptureSessionPreset640x480;
let inputDevice = null;
this._cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo);

// Get the camera input
let error: NSError = null;
this._captureInput = AVCaptureDeviceInput.deviceInputWithDeviceError(this._cameraDevice);
if (this._captureSession.canAddInput(this._captureInput)) {
    this._captureSession.addInput(this._captureInput);
} else {
    console.log("couldn't add input");
}

let self = this;
const VideoDelegate = (NSObject as any).extend({
    captureOutputDidOutputSampleBufferFromConnection(captureOutput: any, sampleBuffer: any, connection: any): void {
        console.log("Capturing Frames");
        if (self.startRecording) {
            self._mp4Writer.appendVideoSample(sampleBuffer);
            console.log("Appending Video Samples");
        }
    },
    captureOutputDidDropSampleBufferFromConnection(captureOutput: any, sampleBuffer: any, connection: any): void {
        console.log("Dropping Frames");
    },
    videoCameraStarted(date) {
        // console.log("CAMERA STARTED");
    }
}, {
    protocols: [AVCaptureVideoDataOutputSampleBufferDelegate]
});
this._videoDelegate = VideoDelegate.new();

// Set up the camera output for frames
this._captureOutput = AVCaptureVideoDataOutput.new();
this._captureQueue = dispatch_queue_create("capture Queue", null);
this._captureOutput.setSampleBufferDelegateQueue(this._videoDelegate, this._captureQueue);
this._captureOutput.alwaysDiscardsLateVideoFrames = false;
this._framePixelFormat = NSNumber.numberWithInt(kCVPixelFormatType_32BGRA);
this._captureOutput.videoSettings = NSDictionary.dictionaryWithObjectForKey(this._framePixelFormat, kCVPixelBufferPixelFormatTypeKey);
this._captureSession.addOutput(this._captureOutput);
this._captureSession.startRunning();
I'm writing an application on OS X that will capture frames from the camera.
Is it possible to set the capture format using the AVCaptureDevice.activeFormat property? I tried this, but it didn't work (the session preset overrides it).
I found that on iOS it is possible by setting the AVCaptureSession's sessionPreset to AVCaptureSessionPresetInputPriority.
The main purpose is to choose video resolutions more detailed than the presets allow.
Updated: April 08, 2020.
In macOS (unlike iOS), a capture session can automatically reconfigure the capture format after you make changes. To prevent automatic changes to the capture format, use the lockForConfiguration() method. Then call beginConfiguration(), set the properties (choose one preset out of a dozen, for instance AVCaptureSessionPresetiFrame960x540), and after that call commitConfiguration(). Finally, call unlockForConfiguration() once you are done changing the device's properties.
Or follow these steps:
Call lockForConfiguration() to acquire access to the device’s config properties.
Change the device’s activeFormat property (as mentioned above & below).
Begin capture with the session’s startRunning() method.
Unlock the device with the unlockForConfiguration().
startRunning() and stopRunning() methods must be invoked to start and stop the flow of your data from the inputs to the outputs, respectively.
You must also call lockForConfiguration() before calling the AVCaptureSession method startRunning(), or the session's preset will override the selected active format on the capture device.
However, you may hold onto the lock without releasing it if you need the device properties to remain unchanged.
Details are in the developer documentation for lockForConfiguration().
If you attempt to set the active format to one not present in the device's formats array, the call throws an invalid-argument exception.
Also, there's an explanation how to change properties: macOS AVFoundation Video Capture
AVCaptureDevice has two relevant properties: formats and activeFormat. formats returns an NSArray of AVCaptureDeviceFormat objects containing all formats exposed by the camera. You select one format from this list and set it as activeFormat. Make sure you set the format after you receive exclusive access to the device by calling AVCaptureDevice's lockForConfiguration:. After you set the format, release the lock with unlockForConfiguration. Then start the AVCaptureSession, which will give you video frames in the format you set.
AVCaptureDeviceFormat is a wrapper around CMFormatDescription. CMVideoFormatDescription is the concrete subclass of CMFormatDescription for video. Use CMVideoFormatDescriptionGetDimensions() to get the width and height of the chosen format. Use CMFormatDescriptionGetMediaSubType() to get the video codec. For raw formats the codec is mostly yuvs or 2vuy; for compressed formats it's h264, dmb1 (mjpeg) and many more.
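As a minimal Objective-C sketch of those steps (assumptions: a default video device exists, error handling is elided, and "largest frame area" is just one example of a selection criterion):

```objc
#import <AVFoundation/AVFoundation.h>

AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

// Pick the format with the largest frame area out of device.formats.
AVCaptureDeviceFormat *best = nil;
int32_t bestArea = 0;
for (AVCaptureDeviceFormat *format in device.formats) {
    CMVideoDimensions dims =
        CMVideoFormatDescriptionGetDimensions(format.formatDescription);
    if (dims.width * dims.height > bestArea) {
        bestArea = dims.width * dims.height;
        best = format;
    }
}

NSError *error = nil;
if (best && [device lockForConfiguration:&error]) {
    device.activeFormat = best;   // must hold the lock while setting this
    [device unlockForConfiguration];
}
```

Remember that the session preset can still override this unless you follow the locking order described above.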
Here's a macOS code snippet written in Swift:
import Cocoa
import AVFoundation

class ViewController: NSViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    override func viewDidAppear() {
        super.viewDidAppear()
        setupCameraSession()
        view.layer?.addSublayer(previewLayer)
        cameraSession.startRunning()
    }

    lazy var cameraSession: AVCaptureSession = {
        let session = AVCaptureSession()
        session.sessionPreset = AVCaptureSession.Preset.hd1280x720
        return session
    }()

    lazy var previewLayer: AVCaptureVideoPreviewLayer = {
        let preview = AVCaptureVideoPreviewLayer(session: self.cameraSession)
        preview.bounds = CGRect(x: 0,
                                y: 0,
                                width: self.view.bounds.width,
                                height: self.view.bounds.height)
        preview.position = CGPoint(x: self.view.bounds.midX,
                                   y: self.view.bounds.midY)
        preview.videoGravity = AVLayerVideoGravity.resize
        return preview
    }()

    func setupCameraSession() {
        do {
            guard let camera = AVCaptureDevice.default(for: .video)
            else { return }
            let deviceInput = try AVCaptureDeviceInput(device: camera)

            // acquire exclusive access to the device's properties
            try camera.lockForConfiguration()
            cameraSession.beginConfiguration()

            camera.focusMode = .continuousAutoFocus
            camera.flashMode = .on
            camera.whiteBalanceMode = .continuousAutoWhiteBalance

            if cameraSession.canAddInput(deviceInput) {
                cameraSession.addInput(deviceInput)
            }

            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings =
                [(kCVPixelBufferPixelFormatTypeKey as NSString):
                 NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange as UInt32)] as [String: Any]
            dataOutput.alwaysDiscardsLateVideoFrames = true
            if cameraSession.canAddOutput(dataOutput) {
                cameraSession.addOutput(dataOutput)
            }

            let preset: AVCaptureSession.Preset = .hd4K3840x2160
            cameraSession.sessionPreset = preset
            cameraSession.commitConfiguration()
            camera.unlockForConfiguration()

            let queue = DispatchQueue(label: "blah.blah.blah")
            dataOutput.setSampleBufferDelegate(self, queue: queue)
        } catch let error as NSError {
            NSLog("\(error.localizedDescription)")
        }
    }
}
And here's an Objective-C snippet that sets the minimum and maximum frame rate (note that activeVideoMinFrameDuration is the reciprocal of the maximum fps, and activeVideoMaxFrameDuration the reciprocal of the minimum fps):

AVCaptureDevice *myCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

if (nil != myCamera) {
    if ([myCamera lockForConfiguration:NULL]) {
        // Clamp the frame rate between 12 and 25 fps.
        [myCamera setActiveVideoMinFrameDuration:CMTimeMake(1, 25)];
        [myCamera setActiveVideoMaxFrameDuration:CMTimeMake(1, 12)];
        [myCamera unlockForConfiguration];
    }
}
return (nil != myCamera);
I'm building an "enhance" function for a crime-fighting project I'm working on. The end goal is to be able to do things like in this documentary video: https://www.youtube.com/watch?v=Vxq9yj2pVWk
I have it mostly working but I'm getting hung up on the part where you speak into the computer mic and it just does what you want it to do. Here's my function thus far:
static const CGFloat kLotsX = MAXFLOAT;
static const CGFloat kLotsY = MAXFLOAT;
- (void)letsEnhance:(UIImageView *)imageView {
[imageView setTransform:CGAffineTransformMakeScale(kLotsX, kLotsY)];
// Fill in missing pixels / image-data
}
(Posted April 1st.)
I'm working on an OS X application that displays custom windows on all available spaces of all the connected displays.
I can get an array of the available display objects by calling [NSScreen screens].
What I'm currently missing is a way of telling if the user connects a display to or disconnects a screen from their system.
I have searched the Cocoa documentation for notifications that deal with a scenario like that without much luck, and I refuse to believe that there isn't some sort of system notification that gets posted when changing the number of displays connected to the system.
Any suggestions on how to solve this problem?
There are several ways to achieve that:
You could implement applicationDidChangeScreenParameters: in your app delegate (the method is part of the NSApplicationDelegate protocol).
Another way is to listen for the NSApplicationDidChangeScreenParametersNotification sent by the default notification center [NSNotificationCenter defaultCenter].
Whenever your delegate method is called or you receive the notification, you can iterate over [NSScreen screens] and see if a display got connected or removed (you have to maintain a display list you can check against at program launch).
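For illustration, here is a minimal (untested) Objective-C sketch of that diffing approach; `_knownScreens` is an assumed ivar seeded with [NSScreen screens] at launch:

```objc
#import <Cocoa/Cocoa.h>

// In your NSApplicationDelegate. _knownScreens holds the screen list
// from the previous check (hypothetical ivar name).
- (void)applicationDidChangeScreenParameters:(NSNotification *)notification
{
    NSArray<NSScreen *> *current = [NSScreen screens];
    if (current.count > _knownScreens.count) {
        NSLog(@"A display was connected");
    } else if (current.count < _knownScreens.count) {
        NSLog(@"A display was disconnected");
    }
    _knownScreens = current;
}
```

Comparing counts alone misses a simultaneous connect and disconnect; compare each screen's deviceDescription (e.g. the NSScreenNumber entry) if you need to know exactly which display changed.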
A non-Cocoa approach would be via Core Graphics Display services:
You have to implement a reconfiguration function and register it with CGDisplayRegisterReconfigurationCallback(CGDisplayReconfigurationCallBack cb, void* obj);
In your reconfiguration function you can query the state of the affected display. E.g.:
void DisplayReconfigurationCallBack(CGDirectDisplayID display,
                                    CGDisplayChangeSummaryFlags flags,
                                    void *userInfo)
{
    if (display == someDisplayYouAreInterestedIn)
    {
        if (flags & kCGDisplaySetModeFlag)
        {
            ...
        }
        if (flags & kCGDisplayRemoveFlag)
        {
            ...
        }
        if (flags & kCGDisplayDisabledFlag)
        {
            ...
        }
    }
    if (flags & (kCGDisplaySetModeFlag | kCGDisplayDisabledFlag | kCGDisplayRemoveFlag))
    {
        ...
    }
}
In Swift 3:

let nc = NotificationCenter.default
nc.addObserver(self,
               selector: #selector(screenDidChange),
               name: NSNotification.Name.NSApplicationDidChangeScreenParameters,
               object: nil)

Notification callback (it must be exposed to Objective-C for the selector to resolve):

@objc final func screenDidChange(notification: NSNotification) {
    let userInfo = notification.userInfo
    print(userInfo ?? [:])
}
So, I am building a program that will run at an exhibition for public use, and I was given the task of adding an inactive state to it: just display some random videos from a folder on the screen, like a screensaver, but inside the application.
So what is the best and proper way of checking whether the user is inactive?
What I am thinking of is some kind of global timer that gets reset on every user input and, if it reaches, let's say, one minute, switches into inactive mode. Are there any better ways?
You can use CGEventSourceSecondsSinceLastEventType, which returns the elapsed time since the last event for a Quartz event source:

/*
 To get the elapsed time since the previous input event
 (keyboard, mouse, or tablet), specify kCGAnyInputEventType.
 */
- (CFTimeInterval)systemIdleTime
{
    CFTimeInterval timeSinceLastEvent =
        CGEventSourceSecondsSinceLastEventType(kCGEventSourceStateHIDSystemState,
                                               kCGAnyInputEventType);
    return timeSinceLastEvent;
}
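One hedged way to implement the "reset on input / fire after a minute" idea without intercepting every event is to poll this value from a repeating timer (sketch only; `idleTimer`, `startIdleMonitoring` and `startInactiveMode` are hypothetical names in your controller):

```objc
// In your controller: schedule a 1 s polling timer, e.g. from awakeFromNib.
- (void)startIdleMonitoring
{
    self.idleTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                      target:self
                                                    selector:@selector(checkIdle:)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)checkIdle:(NSTimer *)timer
{
    CFTimeInterval idle = CGEventSourceSecondsSinceLastEventType(
        kCGEventSourceStateHIDSystemState, kCGAnyInputEventType);
    if (idle >= 60.0) {           // one minute with no input
        [self startInactiveMode]; // hypothetical: start showing the videos
    }
}
```

Polling once a second is cheap and avoids installing global event monitors just to reset a timer.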
I'm expanding on Parag Bafna's answer. In Qt you can do
#include <ApplicationServices/ApplicationServices.h>

double MyClass::getIdleTime() {
    CFTimeInterval timeSinceLastEvent =
        CGEventSourceSecondsSinceLastEventType(kCGEventSourceStateHIDSystemState,
                                               kCGAnyInputEventType);
    return timeSinceLastEvent;
}
You also have to add the framework to your .pro file:
QMAKE_LFLAGS += -F/System/Library/Frameworks/ApplicationServices.framework
LIBS += -framework ApplicationServices
The documentation of the function is here
I've found a solution that uses the HID manager; this seems to be the way to do it in Cocoa. (There's another solution for Carbon, but it doesn't work on 64-bit OS X.)
Citing Daniel Reese on the Dan and Cheryl's Place blog:
#include <IOKit/IOKitLib.h>
/*
Returns the number of seconds the machine has been idle or -1 on error.
The code is compatible with Tiger/10.4 and later (but not iOS).
*/
int64_t SystemIdleTime(void) {
    int64_t idlesecs = -1;
    io_iterator_t iter = 0;
    if (IOServiceGetMatchingServices(kIOMasterPortDefault,
                                     IOServiceMatching("IOHIDSystem"),
                                     &iter) == KERN_SUCCESS)
    {
        io_registry_entry_t entry = IOIteratorNext(iter);
        if (entry) {
            CFMutableDictionaryRef dict = NULL;
            kern_return_t status;
            status = IORegistryEntryCreateCFProperties(entry, &dict,
                                                       kCFAllocatorDefault, 0);
            if (status == KERN_SUCCESS)
            {
                CFNumberRef obj = CFDictionaryGetValue(dict,
                                                       CFSTR("HIDIdleTime"));
                if (obj) {
                    int64_t nanoseconds = 0;
                    if (CFNumberGetValue(obj,
                                         kCFNumberSInt64Type,
                                         &nanoseconds))
                    {
                        // Convert from nanoseconds to seconds:
                        // >> 30 divides by 2^30 (approximately 10^9).
                        idlesecs = (nanoseconds >> 30);
                    }
                }
                CFRelease(dict);
            }
            IOObjectRelease(entry);
        }
        IOObjectRelease(iter);
    }
    return idlesecs;
}
The code has been slightly modified to fit into the 80-character line limit of Stack Overflow.
This might sound like a silly question, but why not just set up a screensaver with a short fuse?
You can listen for the NSNotification named #"com.apple.screensaver.didstart" if you need to do any resets or cleanups when the user wanders away.
Edit: You could also set up the screensaver, wait for it to fire, and then do your own thing when it starts, stopping the screensaver when you display your own videos; but setting up a screensaver the proper way is probably a good idea.
Take a look at UKIdleTimer, maybe it's what you're looking for.