Closing a Tab from a Safari App Extension - safari

I am having a surprisingly hard time finding a way to close a tab from a Safari App Extension.
I can open a tab with
SFSafariApplication.getActiveWindow(completionHandler: {
    $0?.openTab(with: url, makeActiveIfPossible: true)
})
Yet neither the returned SFSafariTab, nor the SFSafariWindow have close() (or performClose()).
I can send a friendly message to the JS side of the extension and run window.close();, but this only works for tabs that were themselves created via JavaScript. Any tab opened by the user, or via target="_blank", is blocked off.
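For reference, the injected-script side of that messaging approach looks roughly like this (a sketch; the message name "closeTab" is my own choice, dispatched from the native side with dispatchMessageToScript):
// Injected script of the Safari App Extension.
safari.self.addEventListener("message", function (event) {
  if (event.name === "closeTab") {
    // Only succeeds for tabs that were opened by script;
    // user-opened tabs ignore window.close().
    window.close();
  }
}, false);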
The third option seems to be the deprecated Safari JS Extension API. I have not yet succeeded in making this work, and it feels icky to invest time into such a dead end.

Safari 12.1 now has a facility to close the current tab:
/// Closes this tab. If this is the last tab in its window, the window is also closed.
- (void)close SF_AVAILABLE_MAC_SAFARI(12_1);

How to record the current screen (not open a popup) and audio with muaz-khan's WebRTC-Experiment - webrtc

I am using muaz-khan's WebRTC-Experiment.
How can I edit the extension so that it does what I want?
Thanks
No, it's not possible to do this within the browser; it's a matter of end-user privacy.
You can build your own native Windows/Mac application to get around this.
Chrome provides screen/window/tab capturing through the chooseDesktopMedia API, but it is available only from a Chrome extension; we can't call this API from a web app.
That demo extension shows how to use chooseDesktopMedia.
We have no control over the screen-selection popup; we can only choose the combination of screen/window/tab/audio with the DesktopCaptureSourceType.
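A rough sketch of that extension-side flow (the source list and callback bodies here are only illustrative):
// Runs in a Chrome extension page, not in an ordinary web page.
chrome.desktopCapture.chooseDesktopMedia(
  ["screen", "window", "tab", "audio"],   // DesktopCaptureSourceType values
  function (streamId) {
    if (!streamId) {
      return; // the user cancelled the picker
    }
    navigator.webkitGetUserMedia({
      audio: false,
      video: {
        mandatory: {
          chromeMediaSource: "desktop",
          chromeMediaSourceId: streamId
        }
      }
    }, function (stream) {
      // use or relay the captured stream here
    }, function (error) {
      console.error(error);
    });
  }
);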

Capybara Selenium Navigate To URL Hangs With Popup Alert on Safari

At the end of my tests Capybara automatically navigates to "about:blank" in order to set up the next test. Sometimes the application I'm testing will throw a popup alert if the user leaves the page (which is expected). I have some code to handle this:
begin
  page.driver.browser.navigate.to("about:blank")
  page.driver.browser.switch_to.alert.accept
rescue Selenium::WebDriver::Error::NoAlertPresentError
  # No alert was present. Don't need to do anything.
end
This works fine on Firefox, Chrome, and IE. But for some reason on Safari the navigate command hangs, I assume because of the popup. Anyone know a workaround for this?
There is no simple workaround for this at this time in any version of Selenium language bindings. It is a known issue the Selenium team is not interested in resolving. Fundamentally, it is due to the architecture of Safari and consequently the architecture of the Safari Driver.
The JavaScript of the Safari Driver extension does not know about most of the alerts and popups and dialogs that appear as modal Cocoa layer windows.
It also cannot interact with them.
There is a way but it won't be easy and nobody's done it.
You would need to use Cocoa, so you would want RubyCocoa in this case (or PyObjC if you were using Python).
You would then possibly also want a sidecar app actually written in Objective-C.
The trick would be to use the AX (Accessibility API) and a separate process to observe if there is an alert as the front window and poke at its labels and buttons' text as visible to the AX APIs.
AX APIs are probably exposed in RubyCocoa via the ScriptingBridge.
However, you would need to add your 'app' to the Security preference pane's list of things allowed to control the computer.
With that, you could detect the window and handle it.
It could be fairly brittle across web sites, but if built well, you could handle expected conditions.
You could try stubbing out the confirm like this, which I believe should work across browsers:
# click ok to confirm
page.evaluate_script('window.confirm = function() { return true; }')

Apple's latest (2015) 'link to app store' directive causes unwanted Safari behaviour

I want to add a link from my app to another of my apps on the appstore.
The question How to link to apps on the app store showed that an itunes.apple.com link was, until recently, the normal way to go. I've tried this and everything is fine. The problem begins when I discard this and use Apple's new recommendation of using appstore.com. I use the following line of code:
[[UIApplication sharedApplication] openURL:[NSURL URLWithString:@"http://appstore.com/myappname"]];
The first time I call this from my app it works well. You see it jump through Safari and move onto the appstore where it displays my app.
At this point if you look back into Safari you will notice a new blank tab labelled Favourites has been created.
If I go back to my app and perform the same action to link to the appstore again I'm prompted with one of the two popup boxes:
"Open this page in "App Store"? [Cancel] or [Open].
or
"Cannot Open Page. Safari cannot open the page because the address is invalid" [OK]
I've found that manually deleting the blank tab in Safari allows the link to work properly, but this behaviour isn't what I want my users to see, and I wouldn't expect them to delete blank tabs from Safari themselves.
Any advice on stopping this behaviour whilst following Apple's new rules greatly appreciated.
A simple and clean solution is to present an instance of SKStoreProductViewController inside your app (modally) to display information on the products you are interested in. The user can interact with it as a small view on the App Store and you can simply dismiss it when done.

Screen sharing with WebRTC?

We're exploring WebRTC but have seen conflicting information on what is possible and supported today.
With WebRTC, is it possible to recreate a screen sharing service similar to join.me or WebEx where:
You can share a portion of the screen
You can give control to the other party
No downloads are necessary
Is this possible today with any of the WebRTC browsers? How about Chrome on iOS?
The chrome.tabCapture API is available for Chrome apps and extensions.
This makes it possible to capture the visible area of the tab as a stream which can be used locally or shared via RTCPeerConnection's addStream().
For more information see the WebRTC Tab Content Capture proposal.
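A minimal sketch of that approach from an extension page (variable names are placeholders; newer code would use addTrack instead of the legacy addStream):
chrome.tabCapture.capture({ audio: false, video: true }, function (stream) {
  if (!stream) {
    console.error(chrome.runtime.lastError);
    return;
  }
  var pc = new RTCPeerConnection(null);
  pc.addStream(stream); // share the captured tab with a peer
  // ...continue with createOffer()/setLocalDescription() and your signalling
});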
Screensharing was initially supported for 'normal' web pages using getUserMedia with the chromeMediaSource constraint – but this has been disallowed.
EDIT 1 April 2015: Edited now that screen sharing is only supported by Chrome in Chrome apps and extensions.
You probably know that screen capture (not tabCapture) is available in Chrome Canary (26+). We just recently published a demo at https://screensharing.azurewebsites.net
Note that you need to run it under https:// and use a constraint like:
video: {
  mandatory: {
    chromeMediaSource: 'screen'
  }
}
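Passing that constraint to the legacy callback-style getUserMedia would look roughly like this (a sketch; the callbacks are placeholders):
navigator.webkitGetUserMedia({
  audio: false,
  video: {
    mandatory: {
      chromeMediaSource: 'screen'
    }
  }
}, function (stream) {
  // display or relay the stream
}, function (error) {
  console.error(error);
});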
You can also find an example here: https://html5-demos.appspot.com/static/getusermedia/screenshare.html
I know I am answering a bit late, but I hope it helps those who stumble upon this page, if not the OP.
At this moment, both Firefox and Chrome support sharing the entire screen, or part of it (an application window you can select), with peers through WebRTC as a MediaStream, just like your camera/microphone feed, so there is no option yet to let the other party take control of your desktop. Other than that, there is another catch: your website has to be served over HTTPS, and in both Firefox and Chrome users will have to install extensions.
You can give it a try in this Muaz Khan's Screen-sharing Demo, the page contains the required extensions too.
P.S.: If you do not want to install an extension to run the demo in Firefox (there is no way around extensions in Chrome), you just need to modify two flags:
go to about:config
set media.getusermedia.screensharing.enabled to true.
add *.webrtc-experiment.com to the media.getusermedia.screensharing.allowed_domains flag.
then refresh the demo page and click the share-screen button.
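With those flags set, a page on an allowed domain could request the screen with Firefox's (legacy, pre-getDisplayMedia) mediaSource constraint, roughly like this sketch:
navigator.mediaDevices.getUserMedia({
  video: { mediaSource: 'screen' }
}).then(function (stream) {
  // display or relay the stream
}).catch(function (error) {
  console.error(error);
});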
To the best of my knowledge, it's not possible right now with any of the browsers, though the Google Chrome team has said that they're eventually intending to support this scenario (see the "Screensharing" bullet point on their roadmap); and I suspect that this means that eventually other browsers will follow, presumably with IE and Safari bringing up the tail. But all of that is probably out somewhere past February, which is when they're supposed to finalize the current WebRTC standard and ship production bits. (Hopefully Microsoft's last-minute spanner in the works doesn't screw that up.) It's possible that I've missed something recent, but I've been following the project pretty carefully, and I don't think screensharing has even made it into Chrome Canary yet, let alone dev/beta/prod. Opera is the only browser that has been keeping pace with Chrome on its WebRTC implementation (FireFox seems to be about six months behind), and I haven't seen anything from that team either about screensharing.
I've been told that there is one way to do it right now, which is to write your own webcamera driver, so that your local screen appeared to the WebRTC getUserMedia() API as just another video source. I don't know that anybody has done this - and of course, it would require installing the driver on the machine in question. By the time all is said and done, it would probably just be easier to use VNC or something along those lines.
const constraints = { video: true };
navigator.mediaDevices.getDisplayMedia(constraints).then((stream) => {
  // todo: use the screen-capture stream
});
Nowadays you can do that, but Safari differs from Chrome in how it handles audio.
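For example, asking for audio alongside the screen is just another constraint; whether an audio track actually comes back depends on the browser (this is a sketch, not a guarantee of Safari behaviour):
async function shareScreenWithAudio() {
  // Chrome can return a tab/system audio track in some cases;
  // Safari typically returns video only.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  console.log('audio tracks:', stream.getAudioTracks().length);
  return stream;
}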
It is possible. I have worked on this and built a demo for screen sharing. During the session the watcher can access your mouse and keyboard: if they move their mouse, your mouse also moves, and if they type on their keyboard, it is typed into your PC.
View this code; it is for screen sharing...
These days you can share the screen like this; you do not need any extensions.
const getLocalScreenCaptureStream = async () => {
  try {
    const constraints = { video: { cursor: 'always' }, audio: false };
    const screenCaptureStream = await navigator.mediaDevices.getDisplayMedia(constraints);
    return screenCaptureStream;
  } catch (error) {
    console.error('failed to get local screen', error);
  }
};
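For example, you could then preview the capture locally and send it to a peer (a sketch; peerConnection and videoElement are assumed to already exist):
const showAndSendScreen = async (peerConnection, videoElement) => {
  const stream = await getLocalScreenCaptureStream();
  if (!stream) return;
  videoElement.srcObject = stream;            // local preview
  stream.getTracks().forEach((track) => {
    peerConnection.addTrack(track, stream);   // share via WebRTC
  });
};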

Safari Extension, Fluid App

I've written a simple extension for Safari that captures clicks on the RSS button in the address bar (calls to the feed:// protocol) and redirects to Google Reader instead of Safari's feed reader. If, however, the user has a Fluid app (one that opens Google Reader) set as their default feed reader the intercept doesn't work.
Is there any known way to capture a request that's being sent to a different app? The extension currently keys on document.beforeload(), but if the document is being opened in a new "app", it's never reached, of course.
Is there a different event I can catch? I haven't found a comprehensive list of events that extensions can catch.
Thanks.
Extensions are a part of Safari, not WebKit, so Safari extensions aren't available in other apps that embed WebKit (like Fluid).