Here are steps to reproduce:
Activate AVAudioSession with .playback category.
Register for AVAudioSession.interruptionNotification
Create two AVPlayers and start them
Interrupt playback by invoking Siri or by receiving a call (Skype, cellular, etc.).
Expected behavior:
The audio session interruption notification is received with the .began type when the interruption starts and .ended when it ends. Also, as a side effect, Siri doesn't respond to commands.
Actual behavior:
Only the .began notification is delivered.
To bring back the .ended notification (which is used to resume playback), remove one of the players.
Question: how do I handle an audio session interruption with more than one AVPlayer running?
I created a simple demo project here: https://github.com/denis-obukhov/AVAudioSessionBug
Tested on iOS 14.4
import UIKit
import AVFoundation

class ViewController: UIViewController {

    private let player1: AVPlayer? = {
        $0.volume = 0.5
        return $0
    }(AVPlayer())

    private let player2: AVPlayer? = {
        $0.volume = 0.5
        return $0 // return nil for any player to bring back the .ended interruption notification
    }(AVPlayer())

    override func viewDidLoad() {
        super.viewDidLoad()
        registerObservers()
        startAudioSession()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        player1?.replaceCurrentItem(with: makePlayerItem(named: "music1"))
        player2?.replaceCurrentItem(with: makePlayerItem(named: "music2"))
        [player1, player2].forEach { $0?.play() }
    }

    private func makePlayerItem(named name: String) -> AVPlayerItem {
        let fileURL = Bundle.main.url(
            forResource: name,
            withExtension: "mp3"
        )!
        return AVPlayerItem(url: fileURL)
    }

    private func registerObservers() {
        NotificationCenter.default.addObserver(
            self, selector: #selector(handleInterruption(_:)),
            name: AVAudioSession.interruptionNotification,
            object: nil
        )
    }

    private func startAudioSession() {
        try? AVAudioSession.sharedInstance().setCategory(.playback)
        try? AVAudioSession.sharedInstance().setActive(true)
    }

    @objc private func handleInterruption(_ notification: Notification) {
        print("GOT INTERRUPTION")
        guard
            let userInfo = notification.userInfo,
            let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
            let type = AVAudioSession.InterruptionType(rawValue: typeValue)
        else {
            return
        }

        switch type {
        case .began:
            print("Interruption BEGAN")
            [player1, player2].forEach { $0?.pause() }
        case .ended:
            // This part isn't called if more than one player is playing
            print("Interruption ENDED")
            [player1, player2].forEach { $0?.play() }
        @unknown default:
            print("Unknown value")
        }
    }
}
I just ran into the same issue, and it was driving me crazy for a few days. I'm using two AVQueuePlayers (a subclass of AVPlayer) to play two sets of audio sounds on top of each other, and I get the AVAudioSession.interruptionNotification value of .began when there is an incoming call, but no .ended notification when the call ends.
That said, I've found that for some reason .ended is reliably sent if you instead use two instances of AVAudioPlayer. It also works with one instance of AVAudioPlayer mixed with one instance of AVQueuePlayer. But for some reason using two instances of AVQueuePlayer (or AVPlayer) seems to break it.
Did you ever find a solution for this? For my purposes I need track queuing, so I must use AVQueuePlayer; I'll probably file a bug report with Apple.
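For what it's worth, here is a minimal sketch of the mixed setup described above, driving the second track with an AVAudioPlayer instead of a second AVQueuePlayer (the DualPlayback class and the file names are purely illustrative):

import AVFoundation

// Illustrative sketch: one AVQueuePlayer plus one AVAudioPlayer instead of two AVQueuePlayers.
final class DualPlayback {
    private let queuePlayer = AVQueuePlayer()   // first set of sounds, keeps queuing support
    private var audioPlayer: AVAudioPlayer?     // second set of sounds

    func start() throws {
        // Hypothetical bundled file names
        let firstURL = Bundle.main.url(forResource: "music1", withExtension: "mp3")!
        let secondURL = Bundle.main.url(forResource: "music2", withExtension: "mp3")!

        audioPlayer = try AVAudioPlayer(contentsOf: secondURL)
        audioPlayer?.play()

        queuePlayer.insert(AVPlayerItem(url: firstURL), after: nil)
        queuePlayer.play()
    }
}

With this combination (one AVAudioPlayer mixed with one AVQueuePlayer), the .ended interruption notification arrived as described above.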
I have a ViewController with an NSButton linked to an IBAction, let's say perform(). The button's key equivalent is the Return key.
However, the Return key is also used for another, temporary action. In another file, unrelated to the previous ViewController, I have an editable NSTextField, so the Return key should validate the text changes.
I would like to validate my text without firing the perform() function triggered by the key-equivalent button.
For the moment, the only solution I've found is to post a textIsBeingEditing notification when my text field becomes first responder and another one from the controlTextDidEndEditing delegate method.
Here is the selector for my notification:
@objc func trackNameIsBeingEditedNotification(_ notification: Notification) {
    guard let value = notification.userInfo?["trackNameIsBeingEdited"] as? Bool else {
        return
    }
    if value {
        backButton.keyEquivalent = ""
        backButton.action = nil
    } else {
        myButton.keyEquivalent = "\r"
        myButton.action = #selector(self.perform(_:))
        // Here the `perform()` function is fired, but I would like to avoid this behaviour…
    }
}
Is there any way to cancel the key event so that perform() isn't fired right after I set myButton.action = #selector(self.perform(_:))?
I see a function called flushBufferedKeyEvents(), but I don't know how to use it.
Don't use notifications; use the delegate method control(_ control: NSControl, textShouldEndEditing fieldEditor: NSText) -> Bool to validate the value of the text field.
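A minimal sketch of that approach, assuming the object acting as the NSTextField's delegate is a view controller (MyViewController is an illustrative name):

import AppKit

// Illustrative: MyViewController stands in for whatever object is the text field's delegate.
class MyViewController: NSViewController, NSTextFieldDelegate {

    func control(_ control: NSControl, textShouldEndEditing fieldEditor: NSText) -> Bool {
        // Validate the field's value here; returning false keeps editing active,
        // returning true ends editing without touching the key-equivalent button.
        return !fieldEditor.string.isEmpty
    }
}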
I'm making a JComponentFactory, and a subclass of it is JLabelFactory. There is a create(Set listeners) method in JLabelFactory, and it is supposed to use the method below to add each listener in the set to the created JLabel. I can do it this way:
protected boolean addSpecificListeners(JLabel label, Set<EventListener> listeners) {
    for (EventListener e : listeners) {
        if (e instanceof MouseMotionListener) {
            label.addMouseMotionListener((MouseMotionListener) e);
        } else if (e instanceof MouseWheelListener) {
            label.addMouseWheelListener((MouseWheelListener) e);
        } else if ....
    }
}
This doesn't seem like a smart way to do it, especially since I would have to go through all the possible listener types for EVERY JComponent I want to add listeners to. Is there a smarter way to do this?
So far tvOS supports two ways to make TV apps, TVML and UIKit, and there is no official mention of how to mix the two, i.e. build a TVML (basically XML) user interface with a native counterpart for the app logic and I/O (playback, streaming, iCloud persistence, etc.).
So, what is the best solution for mixing TVML and UIKit in a new tvOS app?
Below I have tried a solution using code snippets adapted from the Apple Forums and from related questions about JavaScriptCore-to-ObjC/Swift binding.
This is a simple wrapper class in your Swift project.
import UIKit
import TVMLKit

@objc protocol MyJSClass: JSExport {
    func getItem(key: String) -> String?
    func setItem(key: String, data: String)
}

class MyClass: NSObject, MyJSClass {
    func getItem(key: String) -> String? {
        return "String value"
    }

    func setItem(key: String, data: String) {
        print("Set key:\(key) value:\(data)")
    }
}
where the delegate must conform to TVApplicationControllerDelegate:
typealias TVApplicationDelegate = AppDelegate

extension TVApplicationDelegate: TVApplicationControllerDelegate {

    func appController(appController: TVApplicationController, evaluateAppJavaScriptInContext jsContext: JSContext) {
        let myClass: MyClass = MyClass();
        jsContext.setObject(myClass, forKeyedSubscript: "objectwrapper");
    }

    func appController(appController: TVApplicationController, didFailWithError error: NSError) {
        let title = "Error Launching Application"
        let message = error.localizedDescription
        let alertController = UIAlertController(title: title, message: message, preferredStyle: .Alert)
        self.appController?.navigationController.presentViewController(alertController, animated: true, completion: { () -> Void in
        })
    }

    func appController(appController: TVApplicationController, didStopWithOptions options: [String: AnyObject]?) {
    }

    func appController(appController: TVApplicationController, didFinishLaunchingWithOptions options: [String: AnyObject]?) {
    }
}
At this point the JavaScript is very simple. Take a look at the methods with named parameters: you will need to change the method name on the JavaScript counterpart.
App.onLaunch = function(options) {
    var text = objectwrapper.getItem()
    // Keep an eye here: the method name changes when you have named parameters,
    // you need camel case for the parameters:
    objectwrapper.setItemData("test", "value")
}

App.onExit = function() {
    console.log('App finished');
}
Now, suppose that you have a more complex JS interface to export, like:
@protocol MXMJSProtocol <JSExport>
- (void)boot:(JSValue *)status network:(JSValue *)network user:(JSValue *)c3;
- (NSString *)getVersion;
@end

@interface MXMJSObject : NSObject <MXMJSProtocol>
@end

@implementation MXMJSObject

- (NSString *)getVersion {
    return @"0.0.1";
}
Then you can do something like:
JSExportAs(boot,
- (void)boot:(JSValue *)status network:(JSValue*)network user:(JSValue*)c3 );
At this point, on the JS side you will not use the camel-cased name:
objectwrapper.bootNetworkUser(statusChanged,networkChanged,userChanged)
but you are going to do:
objectwrapper.boot(statusChanged,networkChanged,userChanged)
Finally, look at this interface again:
- (void)boot:(JSValue *)status network:(JSValue*)network user:(JSValue*)c3;
The JSValue* arguments passed in are a way to pass completion handlers between ObjC/Swift and JavaScriptCore. In the native code you then call them with arguments:
dispatch_async(dispatch_get_main_queue(), ^{
    NSNumber *state = [NSNumber numberWithInteger:status];
    [networkChanged.context[@"setTimeout"]
        callWithArguments:@[networkChanged, @0, state]];
});
In my findings, the main thread will hang if you do not dispatch asynchronously on the main queue. So I call the JavaScript setTimeout function, which in turn calls the completion-handler callback.
So the approach I have used here is:
Use JSExportAs to take care of methods with named parameters and avoid camel-cased JavaScript counterparts like callMyParam1Param2Param3
Use JSValue as a parameter in place of completion handlers: use callWithArguments on the native side and plain JavaScript functions on the JS side;
Use dispatch_async for completion handlers, possibly going through a 0-delayed setTimeout on the JavaScript side, to avoid freezing the UI (a Swift sketch of this pattern follows).
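For reference, a rough Swift sketch of the same dispatch pattern (notifyNetworkChanged is an illustrative name; networkChanged is assumed to be one of the JSValue callbacks handed over from JavaScript):

import JavaScriptCore

// Invoke a JS callback through setTimeout(0) so it runs asynchronously on the JavaScript side.
func notifyNetworkChanged(_ networkChanged: JSValue, status: Int) {
    DispatchQueue.main.async {
        let setTimeout = networkChanged.context.objectForKeyedSubscript("setTimeout")
        setTimeout?.call(withArguments: [networkChanged, 0, status])
    }
}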
[UPDATE]
I have updated this question to make it clearer. I'm looking for a technical solution for bridging TVML and UIKit in order to:
Understand the best programming model with JavaScriptCore
Have the right bridge from JavaScriptCore to Objective-C and vice versa
Have the best performance when calling JavaScriptCore from Objective-C
This WWDC Video explains how to communicate between JavaScript and Obj-C
Here is how I communicate from Swift to JavaScript:
//when pushAlertInJS() is called, pushAlert(title, description) will be called in JavaScript.
func pushAlertInJS() {
    // Allows us to access the JavaScript context
    appController!.evaluateInJavaScriptContext({ (evaluation: JSContext) -> Void in
        // Get a handle on the "pushAlert" method that you've implemented in JavaScript
        let pushAlert = evaluation.objectForKeyedSubscript("pushAlert")
        // Call your JavaScript method with an array of arguments
        pushAlert.callWithArguments(["Login Failed", "Incorrect Username or Password"])
    }, completion: { (Bool) -> Void in
        // Evaluation block finished running
    })
}
Here is how I communicate from JavaScript to Swift (it requires some setup in Swift):
// Call this method once after setting up your appController.
func createSwiftPrint() {
    // Allows us to access the JavaScript context
    appController?.evaluateInJavaScriptContext({ (evaluation: JSContext) -> Void in
        // This is the block that will be called when JavaScript calls swiftPrint(str)
        let swiftPrintBlock: @convention(block) (String) -> Void = { (str: String) -> Void in
            // Prints the string passed in from JavaScript
            print(str)
        }
        // This creates a function in the JavaScript context called "swiftPrint".
        // Calling swiftPrint(str) in JavaScript will call the block we created above.
        evaluation.setObject(unsafeBitCast(swiftPrintBlock, AnyObject.self), forKeyedSubscript: "swiftPrint" as (NSCopying & NSObjectProtocol)?)
    }, completion: { (Bool) -> Void in
        // Evaluation block finished running
    })
}
[UPDATE] For those of you who would like to know what "pushAlert" would look like on the JavaScript side, here is an example implemented in application.js:
var pushAlert = function(title, description) {
    var alert = createAlert(title, description);
    alert.addEventListener("select", Presenter.load.bind(Presenter));
    navigationDocument.pushDocument(alert);
}

// This convenience function returns an alert template, which can be used to present errors to the user.
var createAlert = function(title, description) {
    var alertString = `<?xml version="1.0" encoding="UTF-8" ?>
    <document>
      <alertTemplate>
        <title>${title}</title>
        <description>${description}</description>
      </alertTemplate>
    </document>`
    var parser = new DOMParser();
    var alertDoc = parser.parseFromString(alertString, "application/xml");
    return alertDoc
}
You sparked an idea that worked... almost. Once you have displayed a native view, there is as of yet no straightforward way to push a TVML-based view onto the navigation stack. What I have done for now is:
let appDelegate = UIApplication.sharedApplication().delegate as! AppDelegate
appDelegate.appController?.navigationController.popViewControllerAnimated(true)

dispatch_async(dispatch_get_main_queue()) {
    tvmlContext!.evaluateScript("showTVMLView()")
}
...then on the JavaScript side:
function showTVMLView() {setTimeout(function(){_showTVMLView();}, 100);}
function _showTVMLView() {//push the next document onto the stack}
This seems to be the cleanest way to move execution off the main thread and onto the JSVirtualMachine thread and avoid the UI lockup. Notice that I had to pop at the very least the current native view controller, as it was getting sent a deadly selector otherwise.
I'm having a problem finding out which NSTextField is focused.
I am building a multi-language form and have several NSTextFields for data entry. I have to change the text input source for some of the NSTextFields during data entry, and I need it to happen automatically.
For now, I can change the text input source as I mentioned here without problem.
The problem I have is changing the input source right when the NSTextField becomes focused. If I use the controlTextDidBeginEditing: delegate method, the input source changes only after the first letter has been typed.
This means that I lose the first word typed in the proper language.
Is there any delegate method to detect this?
You can subclass your NSTextField and override - (BOOL)becomeFirstResponder (from NSResponder) to respond to this kind of event.
You can also try control:textShouldBeginEditing: instead.
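For completeness, a minimal sketch of the control:textShouldBeginEditing: route (FormViewController and switchInputSource(for:) are illustrative names, not from the original post):

import AppKit

// Illustrative: the controller is assumed to be the delegate of the text fields.
final class FormViewController: NSViewController, NSTextFieldDelegate {

    func control(_ control: NSControl, textShouldBeginEditing fieldEditor: NSText) -> Bool {
        if let textField = control as? NSTextField {
            switchInputSource(for: textField)   // called before the first change is applied
        }
        return true
    }

    private func switchInputSource(for textField: NSTextField) {
        // Change the text input source here, as in the approach linked in the question.
    }
}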
You will need to subclass NSTextField
Swift 3+
class FocusingTextField: NSTextField {

    var isFocused: Bool = false

    override func becomeFirstResponder() -> Bool {
        let orig = super.becomeFirstResponder()
        if orig { self.isFocused = true }
        return orig
    }

    override func textDidEndEditing(_ notification: Notification) {
        super.textDidEndEditing(notification)
        self.isFocused = false
    }

    override func selectText(_ sender: Any?) {
        super.selectText(sender)
        self.isFocused = true
    }
}
self.view.window?.firstResponder inside your view controller would give an NSTextView (the field editor), not the NSTextField itself.
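One commonly used trick to map that field editor back to the text field: while an NSTextField is being edited, the field editor's delegate is the text field itself. A small sketch (focusedTextField(in:) is an illustrative helper):

import AppKit

// Returns the NSTextField currently being edited in the given window, if any.
func focusedTextField(in window: NSWindow?) -> NSTextField? {
    guard let fieldEditor = window?.firstResponder as? NSTextView else { return nil }
    return fieldEditor.delegate as? NSTextField
}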