AVCaptureDeviceInput drops frames after the first second of running an AVCaptureSession, with NativeScript / Objective-C

I'm building a video recorder in a NativeScript plugin on the iOS side, which means I'm using the native Objective-C classes inside the plugin so that the NativeScript app has a shared interface with the Android implementation.
I have the camera view loaded and I'm trying to get access to the video frames from the AVCaptureSession. I created an object that implements the delegate protocol to receive the frames, and for the first second the captureOutput function with the DidOutputSampleBuffer parameter delivers frames. From then on, however, every frame is dropped and I don't know why. I can tell they are dropped because the protocol's captureOutput function with the DidDropSampleBuffer parameter runs for every frame.
I tried changing the initialization order of the AVCaptureSession, but that didn't change anything.
Below is the main function that creates the capture session and the capture output. While this is TypeScript, NativeScript lets you call native Objective-C classes and functions, so the logic is the same as it would be in Objective-C. I also create a VideoDelegate object in NativeScript, which corresponds to an Objective-C class, so that I can implement the protocol for receiving the video frames from the capture device output.
this._captureSession = AVCaptureSession.new();
// Get the camera
this._captureSession.sessionPreset = AVCaptureSessionPreset640x480;
let inputDevice = null;
this._cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo);
// Get the camera input
let error: NSError = null;
this._captureInput = AVCaptureDeviceInput.deviceInputWithDeviceError(this._cameraDevice);
if (this._captureSession.canAddInput(this._captureInput)) {
    this._captureSession.addInput(this._captureInput);
} else {
    console.log("couldn't add input");
}

let self = this;
const VideoDelegate = (NSObject as any).extend({
    captureOutputDidOutputSampleBufferFromConnection(captureOutput: any, sampleBuffer: any, connection: any): void {
        console.log("Capturing Frames");
        if (self.startRecording) {
            self._mp4Writer.appendVideoSample(sampleBuffer);
            console.log("Appending Video Samples");
        }
    },
    captureOutputDidDropSampleBufferFromConnection(captureOutput: any, sampleBuffer: any, connection: any): void {
        console.log("Dropping Frames");
    },
    videoCameraStarted(date) {
        // console.log("CAMERA STARTED");
    }
}, {
    protocols: [AVCaptureVideoDataOutputSampleBufferDelegate]
});
this._videoDelegate = VideoDelegate.new();

// Setting up the camera output for frames
this._captureOutput = AVCaptureVideoDataOutput.new();
this._captureQueue = dispatch_queue_create("capture Queue", null);
this._captureOutput.setSampleBufferDelegateQueue(this._videoDelegate, this._captureQueue);
this._captureOutput.alwaysDiscardsLateVideoFrames = false;
this._framePixelFormat = NSNumber.numberWithInt(kCVPixelFormatType_32BGRA);
this._captureOutput.videoSettings = NSDictionary.dictionaryWithObjectForKey(this._framePixelFormat, kCVPixelBufferPixelFormatTypeKey);
this._captureSession.addOutput(this._captureOutput);
this._captureSession.startRunning();
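For reference, a minimal Swift sketch of the native delegate methods that the NativeScript names above marshal to (assuming the standard AVFoundation selectors); the didDrop callback can also report why a frame was dropped via the kCMSampleBufferAttachmentKey_DroppedFrameReason attachment:

import AVFoundation
import CoreMedia

// Sketch of the native AVCaptureVideoDataOutputSampleBufferDelegate that the
// NativeScript VideoDelegate above corresponds to.
final class NativeVideoDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // captureOutput:didOutputSampleBuffer:fromConnection:
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Process or append the frame here.
    }

    // captureOutput:didDropSampleBuffer:fromConnection:
    func captureOutput(_ output: AVCaptureOutput,
                       didDrop sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // The attachment states why the frame was dropped (e.g. OutOfBuffers, FrameWasLate).
        if let reason = CMGetAttachment(sampleBuffer,
                                        key: kCMSampleBufferAttachmentKey_DroppedFrameReason,
                                        attachmentModeOut: nil) {
            print("Dropped frame, reason: \(reason)")
        }
    }
}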

Related

Can the Flutter native side use an EventChannel to transfer MAP data?

I am running the Flutter platform channel "EventChannel" on the Windows platform.
I know this is not mentioned in the official platform channel documentation, but I found an example that works on Windows and I tested it.
Right now I can transfer a single value with the code below:
void initEventChannel(flutter::FlutterEngine* flutter_instance) {
    const static std::string event_channel_name("getFromWinsBrowsingDevice");
    const flutter::StandardMethodCodec& codec = flutter::StandardMethodCodec::GetInstance();
    flutter::EventChannel event_channel_name_(flutter_instance->messenger(), event_channel_name, &codec);
    event_channel_name_.SetStreamHandler(
        std::make_unique<flutter::StreamHandlerFunctions<flutter::EncodableValue>>(on_listen, on_cancel));
}

std::unique_ptr<flutter::StreamHandlerError<flutter::EncodableValue>> on_listen(
    const flutter::EncodableValue* arguments,
    std::unique_ptr<flutter::EventSink<flutter::EncodableValue>>&& events) {
    std::thread BrowsingThread(sentBrowsingEvent, std::move(events));
    BrowsingThread.detach();
    return NULL;
}

void sentBrowsingEvent(std::unique_ptr<flutter::EventSink<flutter::EncodableValue>>&& events) {
    // Browsing_Check_routine, &browsing_test
    // create MAP in Browsing_Check_routine to feed the event back to the UI
    while (1) {
        Browsing_Check_routine();
        events.get()->Success(flutter::EncodableValue(BrowsingDeviceMap)); // This fails when sending the whole MAP
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
I would like some help: how should I fix this error so that I can pass the MAP data to the Flutter side?
events.get()->Success(flutter::EncodableValue(BrowsingDeviceMap)); // This fails when sending the whole MAP
Thank you!
Edit:
I realized my map is a plain C++ map; if I want to pass it with EncodableValue, the variable must be declared as an EncodableMap.

How to create a video calling function with swiftUI?

I am creating a messaging app with SwiftUI, and I want to add a video calling function to it. I used the SkyWay WebRTC API (https://webrtc.ecl.ntt.com/en/) to achieve this, and I could build an example project written in Swift. Now what I am trying to do is link the local stream to an SKWVideo view and wrap it up in a UIViewRepresentable, but I got stuck with the error message below.
import SwiftUI
import SkyWay
import UIKit

struct ContentView: View {
    @State var video = SKWVideo()

    var body: some View {
        VideoView(localStreamView: $video)
    }
}

struct VideoView: UIViewRepresentable {
    @Binding var localStreamView: SKWVideo

    func makeUIView(context: Context) -> SKWVideo {
        let option: SKWPeerOption = SKWPeerOption.init()
        option.key = "xxxx"
        option.domain = "localhost"
        let peer = SKWPeer(options: option)
        SKWNavigator.initialize(peer!)
        let constraints: SKWMediaConstraints = SKWMediaConstraints()
        let localStream = SKWNavigator.getUserMedia(constraints)
        localStream?.addVideoRenderer(localStreamView, track: 0)
        return localStreamView
    }

    func updateUIView(_ uiView: SKWVideo, context: Context) {
        //
    }
}
I got this error:
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Modifications to the layout engine must not be performed from a background thread after it has been accessed from the main thread.'
I ran it on a real iPhone 7 device, and I added the plist settings. I have absolutely no idea what's wrong right now. Please help me.
iOS 13.0
Xcode 11.0

AVFoundation capturing video with custom resolution

I'm writing an application on OS X that will capture frames from the camera.
Is it possible to configure the capture using the AVCaptureDevice.activeFormat property? I tried this, but it didn't work (the session preset overrides it).
I found that on iOS it is possible by setting the AVCaptureSession sessionPreset to AVCaptureSessionPresetInputPriority.
The main goal is to choose a more specific video resolution than the presets allow.
Updated: April 08, 2020.
In macOS (unlike iOS), a capture session can automatically reconfigure the capture format after you make changes. To prevent automatic changes to the capture format, use the lockForConfiguration() method. Then call beginConfiguration(), set the properties (choose one preset out of the dozen available, for instance AVCaptureSessionPresetiFrame960x540), and after that call commitConfiguration(). Finally, call unlockForConfiguration() after changing the device properties.
Or follow these steps:
Call lockForConfiguration() to acquire access to the device's configuration properties.
Change the device's activeFormat property (as mentioned above and below).
Begin capture with the session's startRunning() method.
Unlock the device with unlockForConfiguration().
The startRunning() and stopRunning() methods must be invoked to start and stop the flow of data from the inputs to the outputs, respectively.
You must also call lockForConfiguration() before calling the AVCaptureSession method startRunning(), or the session's preset will override the selected active format on the capture device.
However, you may hold the lock without releasing it if you need the device properties to remain unchanged.
Details are in the developer documentation for lockForConfiguration().
If you attempt to set the active format to one not present in the device's accessible formats, an invalidArgumentException is thrown.
Also, there's an explanation of how to change the properties: macOS AVFoundation Video Capture
In AVCaptureDevice there are two relevant properties: formats and activeFormat. formats returns an NSArray of AVCaptureDeviceFormat containing all formats exposed by the camera. You select any one format from this list and set it as activeFormat. Make sure that you set the format after you have received exclusive access to the device by calling AVCaptureDevice lockForConfiguration. After you set the format, release the lock with AVCaptureDevice unlockForConfiguration. Then start the AVCaptureSession, which will give you video frames in the format you set.
AVCaptureDeviceFormat is a wrapper for CMFormatDescription. CMVideoFormatDescription is the concrete subclass of CMFormatDescription. Use CMVideoFormatDescriptionGetDimensions() to get the width and height of the chosen format. Use CMFormatDescriptionGetMediaSubType() to get the video codec. For raw formats the codec is mostly yuvs or 2vuy; for compressed formats it's h264, dmb1 (mjpeg), and many more.
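As an illustration, here is a minimal Swift sketch of those steps (the 1920x1080 target is only an example; inspect the device's formats array for what your camera actually offers):

import AVFoundation
import CoreMedia

// Pick the first format matching the desired dimensions and make it the activeFormat.
func selectCustomFormat(for device: AVCaptureDevice, session: AVCaptureSession) throws {
    guard let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width == 1920 && dims.height == 1080
    }) else { return }

    try device.lockForConfiguration()   // exclusive access to the device's configuration
    device.activeFormat = format        // holding the lock here keeps the session preset
    session.startRunning()              // from overriding the selected format
    device.unlockForConfiguration()
}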
Here's a macOS code snippet written in Swift:
import Cocoa
import AVFoundation

class ViewController: NSViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    override func viewDidAppear() {
        super.viewDidAppear()
        setupCameraSession()
        view.layer?.addSublayer(previewLayer)
        cameraSession.startRunning()
    }

    lazy var cameraSession: AVCaptureSession = {
        let session = AVCaptureSession()
        session.sessionPreset = AVCaptureSession.Preset.hd1280x720
        return session
    }()

    lazy var previewLayer: AVCaptureVideoPreviewLayer = {
        let preview = AVCaptureVideoPreviewLayer(session: self.cameraSession)
        preview.bounds = CGRect(x: 0,
                                y: 0,
                                width: self.view.bounds.width,
                                height: self.view.bounds.height)
        preview.position = CGPoint(x: self.view.bounds.midX,
                                   y: self.view.bounds.midY)
        preview.videoGravity = AVLayerVideoGravity.resize
        return preview
    }()

    func setupCameraSession() {
        let captureDevice = AVCaptureDevice.default(for: AVMediaType.video)
        do {
            let deviceInput = try AVCaptureDeviceInput(device: captureDevice!)
            guard let camera = AVCaptureDevice.default(for: .video) else { return }

            // acquire exclusive access to the device's properties
            try camera.lockForConfiguration()
            cameraSession.beginConfiguration()

            camera.focusMode = .continuousAutoFocus
            camera.flashMode = .on
            camera.whiteBalanceMode = .continuousAutoWhiteBalance

            if cameraSession.canAddInput(deviceInput) == true {
                cameraSession.addInput(deviceInput)
            }

            let dataOutput = AVCaptureVideoDataOutput()
            dataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString) :
                NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange as UInt32)] as [String : Any]
            dataOutput.alwaysDiscardsLateVideoFrames = true

            if cameraSession.canAddOutput(dataOutput) == true {
                cameraSession.addOutput(dataOutput)
            }

            let preset: AVCaptureSession.Preset = .hd4K3840x2160
            cameraSession.sessionPreset = preset

            cameraSession.commitConfiguration()
            camera.unlockForConfiguration()

            let queue = DispatchQueue(label: "blah.blah.blah")
            dataOutput.setSampleBufferDelegate(self, queue: queue)
        } catch let error as NSError {
            NSLog("\(error.localizedDescription)")
        }
    }
}
And here's an Objective-C snippet setting the minimum and maximum FPS (activeVideoMinFrameDuration is the reciprocal of the maximum frame rate, and activeVideoMaxFrameDuration the reciprocal of the minimum):
// myCamera is an AVCaptureDevice obtained earlier (e.g. the default video device)
if ( NULL != myCamera ) {
    if ( [ myCamera lockForConfiguration: NULL ] ) {
        [ myCamera setActiveVideoMinFrameDuration: CMTimeMake( 1, 25 ) ];  // at most 25 fps
        [ myCamera setActiveVideoMaxFrameDuration: CMTimeMake( 1, 12 ) ];  // at least 12 fps
        [ myCamera unlockForConfiguration ];
    }
}
return ( NULL != myCamera );

SceneKit presentScene(_:withTransition:incomingPointOfView:completionHandler:) crash with dynamically loaded SCNScene

I'm trying to transition from one scene to another, but when I call presentScene there is a crash!
The scenes are not stored in a class or referenced, they are loaded directly into the presentScene call.
Screenshot of crash in Xcode:
My simple minimal project is here: https://dl.dropboxusercontent.com/u/6979623/SceneKitTransitionTest.zip
MKScene is just a subclass of SCNScene, because I would like to know when a scene is deinited, to be sure that it is.
self.gameView!.scene = MKScene(named:"art.scnassets/scene1.scn")
then later I call
let scnView:SCNView = self.gameView! as SCNView
let skTransition:SKTransition = SKTransition.crossFadeWithDuration(1.0)
skTransition.pausesIncomingScene = false
skTransition.pausesOutgoingScene = false
self.sceneToggler = !self.sceneToggler
// transition
scnView.presentScene((self.sceneToggler ? MKScene(named:"art.scnassets/scene1.scn")! : MKScene(named:"art.scnassets/scene2.scn")!), withTransition:skTransition, incomingPointOfView:nil, completionHandler:nil)
If I keep a reference to the scene in my class then it works – but that's not what I want. I just want to transition to a different scene and leave the current scene behind to be deinited.
Why is this crashing?
It seems like a simple task…
That's a bug in SceneKit. Workaround: keep a reference to the outgoing scene before calling "presentScene" and release it after that call.
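A minimal sketch of that workaround, reusing the names from the question above (scnView, skTransition):

// Hold the outgoing scene so it is not deallocated while presentScene runs.
let outgoingScene = scnView.scene
scnView.presentScene(MKScene(named: "art.scnassets/scene2.scn")!,
                     withTransition: skTransition,
                     incomingPointOfView: nil,
                     completionHandler: nil)
// outgoingScene can now go out of scope (be released) once the call has returned.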
I was able to reproduce your crash with a somewhat simpler project. It does not use MKScene and does not use notifications to trigger the transition. It crashes on the second attempt to load.
I have filed this at https://bugreport.apple.com as rdar://24012973, which you may wish to dupe, along with your longer project.
Here's my simplified ViewController.swift. Switching between the stored SCNScene properties and the on-the-fly loads (the commented-out alternatives in nextSceneToLoad()) toggles between correct and crashing behavior. That is,
nextScene = SCNScene(named:"art.scnassets/scene2.scn")!
fails, and
nextScene = scene2!
works.
// ViewController.swift

import Cocoa
import SceneKit
import SpriteKit

class ViewController: NSViewController {

    @IBOutlet weak var sceneView: SCNView!

    private var sceneToggler: Bool = false
    private var scene1: SCNScene? = SCNScene(named: "art.scnassets/scene1.scn")
    private var scene2: SCNScene? = SCNScene(named: "art.scnassets/scene2.scn")

    private func nextSceneToLoad() -> SCNScene {
        let nextScene: SCNScene
        if (sceneToggler) {
            //nextScene = SCNScene(named:"art.scnassets/scene1.scn")!
            nextScene = scene1!
            print("scene1")
        }
        else {
            nextScene = SCNScene(named: "art.scnassets/scene2.scn")!
            //nextScene = scene2!
            print("scene2")
        }
        print(nextScene)
        sceneToggler = !sceneToggler
        return nextScene
    }

    override func mouseUp(theEvent: NSEvent) {
        let skTransition: SKTransition = SKTransition.fadeWithDuration(5.0)
        skTransition.pausesIncomingScene = false
        skTransition.pausesOutgoingScene = false
        sceneView.presentScene(nextSceneToLoad(),
                               withTransition: skTransition,
                               incomingPointOfView: nil,
                               completionHandler: nil)
        super.mouseUp(theEvent)
    }
}

How to bridge TVML/JavaScriptCore to UIKit/Objective-C (Swift)?

So far tvOS supports two ways to make TV apps, TVML and UIKit, and there is no official mention of how to mix the two: a TVML (basically XML) user interface with a native counterpart for the app logic and I/O (playback, streaming, iCloud persistence, etc.).
So, what is the best way to mix TVML and UIKit in a new tvOS app?
Below is a solution I have tried, based on code snippets adapted from the Apple Forums and from related questions about JavaScriptCore-to-ObjC/Swift binding.
This is a simple wrapper class in your Swift project.
import UIKit
import TVMLKit

@objc protocol MyJSClass: JSExport {
    func getItem(key: String) -> String?
    func setItem(key: String, data: String)
}

class MyClass: NSObject, MyJSClass {
    func getItem(key: String) -> String? {
        return "String value"
    }

    func setItem(key: String, data: String) {
        print("Set key:\(key) value:\(data)")
    }
}
where the delegate must conform to TVApplicationControllerDelegate:
typealias TVApplicationDelegate = AppDelegate

extension TVApplicationDelegate: TVApplicationControllerDelegate {

    func appController(appController: TVApplicationController, evaluateAppJavaScriptInContext jsContext: JSContext) {
        let myClass: MyClass = MyClass()
        jsContext.setObject(myClass, forKeyedSubscript: "objectwrapper")
    }

    func appController(appController: TVApplicationController, didFailWithError error: NSError) {
        let title = "Error Launching Application"
        let message = error.localizedDescription
        let alertController = UIAlertController(title: title, message: message, preferredStyle: .Alert)
        self.appController?.navigationController.presentViewController(alertController, animated: true, completion: { () -> Void in
        })
    }

    func appController(appController: TVApplicationController, didStopWithOptions options: [String: AnyObject]?) {
    }

    func appController(appController: TVApplicationController, didFinishLaunchingWithOptions options: [String: AnyObject]?) {
    }
}
At this point the JavaScript is very simple. Take a look at the methods with named parameters: you need to adjust the method name on the JavaScript side accordingly:
App.onLaunch = function(options) {
    var text = objectwrapper.getItem()
    // Note: the method name changes when it has named parameters; the parameter
    // names are camel-cased into the JavaScript name:
    objectwrapper.setItemData("test", "value")
}

App.onExit = function() {
    console.log('App finished');
}
Now, suppose that you have a more complex JS interface to export, like:
@protocol MXMJSProtocol <JSExport>
- (void)boot:(JSValue *)status network:(JSValue*)network user:(JSValue*)c3;
- (NSString*)getVersion;
@end

@interface MXMJSObject : NSObject <MXMJSProtocol>
@end

@implementation MXMJSObject
- (NSString*)getVersion {
    return @"0.0.1";
}
@end
you can do something like:
JSExportAs(boot,
- (void)boot:(JSValue *)status network:(JSValue*)network user:(JSValue*)c3 );
At this point the JS counterpart will not use the camel-cased name:
objectwrapper.bootNetworkUser(statusChanged,networkChanged,userChanged)
but you are going to do:
objectwrapper.boot(statusChanged,networkChanged,userChanged)
Finally, look at this interface again:
- (void)boot:(JSValue *)status network:(JSValue*)network user:(JSValue*)c3;
The JSValue* parameters passed in are a way to pass completion handlers between ObjC/Swift and JavaScriptCore. In the native code you then invoke them with arguments:
dispatch_async(dispatch_get_main_queue(), ^{
    NSNumber *state = [NSNumber numberWithInteger:status];
    [networkChanged.context[@"setTimeout"]
        callWithArguments:@[networkChanged, @0, state]];
});
In my testing, the main thread will hang if you do not dispatch asynchronously onto the main queue. So I call the JavaScript setTimeout function, which in turn calls the completion-handler callback.
So the approach I have used here is:
Use JSExportAs to take care of methods with named parameters and avoid camel-cased JavaScript counterparts like callMyParam1Param2Param3;
Use JSValue parameters in place of completion handlers: invoke them with callWithArguments on the native side and pass plain JavaScript functions on the JS side;
Use dispatch_async for completion handlers, possibly going through a 0-delay setTimeout on the JavaScript side, to avoid freezing the UI (a Swift sketch of this follows the list).
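Here is a minimal Swift sketch of that last point, in the same Swift 2-era style as the snippets above (networkChanged is assumed to be a JSValue holding a JavaScript function received through one of the exported methods):

// Invoke a JS completion handler by scheduling it through the context's setTimeout,
// dispatched asynchronously on the main queue so the UI does not freeze.
func notifyNetworkChanged(networkChanged: JSValue, state: Int) {
    dispatch_async(dispatch_get_main_queue()) {
        let setTimeout = networkChanged.context.objectForKeyedSubscript("setTimeout")
        setTimeout.callWithArguments([networkChanged, 0, state])
    }
}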
[UPDATE]
I have updated this question in order to be clearer. I'm looking for a technical solution for bridging TVML and UIKit in order to:
Understand the best programming model with JavaScriptCore
Have the right bridge from JavaScriptCore to Objective-C and vice versa
Get the best performance when calling JavaScriptCore from Objective-C
This WWDC video explains how to communicate between JavaScript and Obj-C.
Here is how I communicate from Swift to JavaScript:
// When pushAlertInJS() is called, pushAlert(title, description) will be called in JavaScript.
func pushAlertInJS() {
    // allows us to access the JavaScript context
    appController!.evaluateInJavaScriptContext({ (evaluation: JSContext) -> Void in
        // get a handle on the "pushAlert" method that you've implemented in JavaScript
        let pushAlert = evaluation.objectForKeyedSubscript("pushAlert")
        // call your JavaScript method with an array of arguments
        pushAlert.callWithArguments(["Login Failed", "Incorrect Username or Password"])
    }, completion: { (Bool) -> Void in
        // evaluation block finished running
    })
}
Here is how I communicate from JavaScript to Swift (it requires some setup in Swift):
// Call this method once after setting up your appController.
func createSwiftPrint() {
    // allows us to access the JavaScript context
    appController?.evaluateInJavaScriptContext({ (evaluation: JSContext) -> Void in
        // this is the block that will be called when JavaScript calls swiftPrint(str)
        let swiftPrintBlock: @convention(block) (String) -> Void = { (str: String) -> Void in
            // prints the string passed in from JavaScript
            print(str)
        }
        // this creates a function in the JavaScript context called "swiftPrint".
        // calling swiftPrint(str) in JavaScript will call the block we created above.
        evaluation.setObject(unsafeBitCast(swiftPrintBlock, AnyObject.self),
                             forKeyedSubscript: "swiftPrint" as (NSCopying & NSObjectProtocol)?)
    }, completion: { (Bool) -> Void in
        // evaluation block finished running
    })
}
[UPDATE] For those of you who would like to know what "pushAlert" looks like on the JavaScript side, here is an example implemented in application.js:
var pushAlert = function(title, description) {
    var alert = createAlert(title, description);
    alert.addEventListener("select", Presenter.load.bind(Presenter));
    navigationDocument.pushDocument(alert);
}

// This convenience function returns an alert template, which can be used to present errors to the user.
var createAlert = function(title, description) {
    var alertString = `<?xml version="1.0" encoding="UTF-8" ?>
    <document>
      <alertTemplate>
        <title>${title}</title>
        <description>${description}</description>
      </alertTemplate>
    </document>`

    var parser = new DOMParser();
    var alertDoc = parser.parseFromString(alertString, "application/xml");
    return alertDoc
}
You sparked an idea that worked... almost. Once you have displayed a native view, there is no straightforward way as of yet to push a TVML-based view onto the navigation stack. What I have done at this time is:
let appDelegate = UIApplication.sharedApplication().delegate as! AppDelegate
appDelegate.appController?.navigationController.popViewControllerAnimated(true)

dispatch_async(dispatch_get_main_queue()) {
    tvmlContext!.evaluateScript("showTVMLView()")
}
...then on the JavaScript side:
function showTVMLView() { setTimeout(function() { _showTVMLView(); }, 100); }
function _showTVMLView() { /* push the next document onto the stack */ }
This seems to be the cleanest way to move execution off the main thread and onto the JSVirtualMachine thread and avoid the UI lockup. Notice that I had to pop at the very least the current native view controller, as it was getting sent a deadly selector otherwise.