Use node-webkit to remote control an iframe?

I'm trying to automate a workflow where we have to log in to a website, navigate, get redirected several times, and finally upload a file into a reporting system.
After failing with phantomjs/casperjs (where we also don't really get visual feedback), I was thinking about using node-webkit.
So basically, what I am trying to do is write a "controller" that opens another webpage in an iframe and then manipulates the fields, hits buttons, and so on.
Is this something that can be done? If yes, I am struggling to get a handle on the fields in order to fill them...
Or is this a classic "wrong tool" approach and we shouldn't be doing that?
Something along the lines of:
var gui = require('nw.gui'); // node-webkit's GUI module

var new_win = gui.Window.get(
    window.open('https://remote/login/site/')
);

new_win.on('loaded', function () {
    // None of this really works, but it might help you understand what I'm trying to do:
    //window.console.log(new_win.window.document.getElementById("user"));
    //window.eval(new_win, "code_to_fill_the_user_field");
    //var userField = new_win.window.document.getElementById("user");
    //console.log(userField);
});
Update 2014-08-02:
I understand now that node-webkit is intended for creating desktop apps with HTML5, not for remote controlling websites, so we can forget about this question.
I did solve the problem with phantomjs/casperjs now, BTW.
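For the curious, a minimal CasperJS sketch of the kind of flow involved; the URL, form selector, and field names below are placeholders, not our real site:
var casper = require('casper').create();

casper.start('https://remote/login/site/', function () {
    // Fill and submit the login form; the selector and field names are placeholders.
    this.fill('form#login', {
        user: 'my_user',
        password: 'my_password'
    }, true);
});

casper.then(function () {
    // Once the redirects have settled, trigger the upload (placeholder selector).
    this.click('#upload-report');
});

casper.run();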


Related

Show a single thumbnail when posting on facebook

A little background info: my team and I developed a website for a real estate agency, and I've been assigned the task of setting the image of the currently selected property as the image for Facebook's sharing feature.
The webpage for a property is dynamic, as there are several listings, so what I've done is select the first image that is loaded on the page and set it in the og:image meta tag.
Now let's say I copy the URL and post it on Facebook: it'll show the correct thumbnail, HOWEVER, it'll also show multiple thumbnails from other listings.
All images on the website are over 200 x 200 px and within an aspect ratio of 3:1.
My question is: how do I tell Facebook to only take my initial image and not grab others while it's at it?
Is there perhaps a SelectSingleImage property that I can apply?
I've already spent more time searching for the answer to this issue than I would have liked, so thanks for any help provided; it's much appreciated.
One method I use sometimes is to recognize Facebook's crawler and simply serve it different data. This way you can actually have only one image on the page (as far as Facebook knows).
I don't know anything about vb.net, but here is a simple code sample in PHP. All it does is run a regular expression over the user agent of the request to match it against the string "facebook".
$isFacebook = false;
// Facebook's crawler identifies itself in the User-Agent header
// (e.g. "facebookexternalhit/..."), so a simple substring match works.
if (preg_match("/facebook/", strtolower($_SERVER["HTTP_USER_AGENT"]))) {
    $isFacebook = true;
}
Facebook may very well change their user agent signature one day, but for now I'm pretty sure you'll be safe; just keep synced with the Developers Blog and the Roadmap.
It seems that Facebook had cached those images for some bizarre reason, but to resolve the issue all I had to do was enter the URL into Facebook's Linter tool, which in turn cleared the cache on their server.

Screen sharing with WebRTC?

We're exploring WebRTC but have seen conflicting information on what is possible and supported today.
With WebRTC, is it possible to recreate a screen sharing service similar to join.me or WebEx where:
You can share a portion of the screen
You can give control to the other party
No downloads are necessary
Is this possible today with any of the WebRTC browsers? How about Chrome on iOS?
The chrome.tabCapture API is available for Chrome apps and extensions.
This makes it possible to capture the visible area of the tab as a stream which can be used locally or shared via RTCPeerConnection's addStream().
For more information see the WebRTC Tab Content Capture proposal.
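For illustration, a minimal sketch of how that capture might look inside an extension (the RTCPeerConnection signalling is assumed and omitted):
// Requires the "tabCapture" permission in the extension's manifest.
chrome.tabCapture.capture({ video: true, audio: false }, function (stream) {
    if (!stream) {
        console.error('tabCapture failed:', chrome.runtime.lastError);
        return;
    }
    var pc = new RTCPeerConnection(); // ICE/signalling setup omitted
    pc.addStream(stream); // share the captured tab with the remote peer
    // createOffer()/setLocalDescription() etc. would follow here
});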
Screensharing was initially supported for 'normal' web pages using getUserMedia with the chromeMediaSource constraint – but this has been disallowed.
EDIT 1 April 2015: updated now that screen sharing is only supported by Chrome in Chrome apps and extensions.
You guys probably know that screen capture (not tabCapture) is available in Chrome Canary (26+). We just recently published a demo at https://screensharing.azurewebsites.net
Note that you need to run it under https:// and request the stream with constraints like:
navigator.webkitGetUserMedia({
    video: {
        mandatory: {
            chromeMediaSource: 'screen'
        }
    }
}, onSuccess, onError); // onSuccess receives the screen MediaStream
You can also find an example here: https://html5-demos.appspot.com/static/getusermedia/screenshare.html
I know I am answering a bit late, but I hope it helps those who stumble upon this page, if not the OP.
At this moment, both Firefox and Chrome support sharing the entire screen, or part of it (some application window which you can select), with peers through WebRTC as a MediaStream, just like your camera/microphone feed; there is no option to let the other party take control of your desktop yet. Other than that, there's another catch: your website has to be running in https mode, and in both Firefox and Chrome the users are going to have to install extensions.
You can give it a try in this Muaz Khan's Screen-sharing Demo, the page contains the required extensions too.
P.S.: If you do not want to install an extension to run the demo (there is no way to escape extensions in Chrome), in Firefox you just need to modify two flags:
go to about:config
set media.getusermedia.screensharing.enabled to true.
add *.webrtc-experiment.com to the media.getusermedia.screensharing.allowed_domains flag.
refresh the demo page and click on the share screen button.
To the best of my knowledge, it's not possible right now with any of the browsers, though the Google Chrome team has said that they're eventually intending to support this scenario (see the "Screensharing" bullet point on their roadmap); I suspect this means that other browsers will eventually follow, presumably with IE and Safari bringing up the tail. But all of that is probably out somewhere past February, which is when they're supposed to finalize the current WebRTC standard and ship production bits. (Hopefully Microsoft's last-minute spanner in the works doesn't screw that up.) It's possible that I've missed something recent, but I've been following the project pretty carefully, and I don't think screensharing has even made it into Chrome Canary yet, let alone dev/beta/prod. Opera is the only browser that has been keeping pace with Chrome on its WebRTC implementation (Firefox seems to be about six months behind), and I haven't seen anything from that team about screensharing either.
I've been told that there is one way to do it right now, which is to write your own web camera driver, so that your local screen appears to the WebRTC getUserMedia() API as just another video source. I don't know that anybody has done this, and of course it would require installing the driver on the machine in question. By the time all is said and done, it would probably be easier to just use VNC or something along those lines.
Now you can do it natively:
const constraints = { video: true, audio: false }; // pick what you need
navigator.mediaDevices.getDisplayMedia(constraints).then((stream) => {
    // attach the stream to a <video> element or an RTCPeerConnection here
}).catch((error) => {
    console.error('getDisplayMedia failed:', error);
});
Note that Safari is different from Chrome when it comes to audio capture.
It is possible; I have worked on this and built a demo for screen sharing. While sharing, the watcher can access your mouse and keyboard: if he moves his mouse, your mouse also moves, and if he types on his keyboard, it gets typed into your PC.
View this code; this code is for screen sharing...
These days you can share the screen like this; you don't need any extensions.
const getLocalScreenCaptureStream = async () => {
    try {
        // Prompt the user to pick a screen/window to share; keep the cursor visible.
        const constraints = { video: { cursor: 'always' }, audio: false };
        const screenCaptureStream = await navigator.mediaDevices.getDisplayMedia(constraints);
        return screenCaptureStream;
    } catch (error) {
        console.error('failed to get local screen', error);
    }
};
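For example, the returned stream could be previewed in a local <video> element (the selector here is an assumption):
// Hypothetical usage: show the captured screen in a <video> element.
const videoElement = document.querySelector('video#preview');
getLocalScreenCaptureStream().then((stream) => {
    if (stream) {
        videoElement.srcObject = stream;
        videoElement.play();
    }
});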

Forced to switch from iframe to SDK for share buttons

I was using an iframe share button solution to be able to share products or posts on Facebook. I used this system, which worked great on about 10 different websites, but this week they all ended up not working.
I read that I should get an appId for each website and use the asynchronous SDK, which I did, and followed these steps to get it to work:
Load the SDK with appId authentication (see the sketch after this list).
Load jQuery if needed (some buttons require it).
Add the button code where desired.
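For reference, the standard asynchronous load looks roughly like this ('YOUR_APP_ID' is a placeholder and the version string is an assumption):
window.fbAsyncInit = function () {
    FB.init({
        appId: 'YOUR_APP_ID', // placeholder: each site gets its own appId
        xfbml: true,          // parse social plugins (like share buttons) on load
        version: 'v2.0'
    });
};

// Standard async loader snippet: injects the SDK script once.
(function (d, s, id) {
    var js, fjs = d.getElementsByTagName(s)[0];
    if (d.getElementById(id)) { return; }
    js = d.createElement(s); js.id = id;
    js.src = '//connect.facebook.net/en_US/sdk.js';
    fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));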
I can see the SDK is loaded in <div id="fb-root"></div>, but the share button never appears on the page, and it is not a layout issue. I have tried several different buttons, but they all seem to never make it to the user.
I read a lot of posts about the issue, but each one was magically solved on July 22nd... not mine. I need help implementing this first button, as I need to fix many websites afterwards. Thx!
When doing a fresh implementation of FB buttons, I ended up going here:
https://developers.facebook.com/tools/debug
to force FB to crawl the pages I was working on at the moment.
This proved useful in getting immediate feedback on how things were working out.

Post Screenshot of APP to wall

How would I make a button that, when clicked, posts a screenshot of the canvas page the user of the app is on to their wall? And maybe adds some comments with it as well.
You would have to require the user to add an additional plugin to their browser. There are many plugins that do this; for example, clipboard.com provides this functionality.
To the best of my knowledge, these extensions actually make a request themselves (in the back end) to the URL you are currently on and take the screenshot from that URL on the actual server. From there it can be shared and commented on.
However, without some external utility, I don't think this functionality is possible.
Hope this helps...

Getting DOM from page using Chromium/WebKit

I'm trying to get access to a page's DOM after rendering. I do not need to view the page, and I plan to apply this programmatically, without any GUI or interaction.
The reason I am interested in post-rendering is that I want to know where objects appear. Some location information is coded in the HTML (e.g., via offsetLeft), but much is not. Also, JavaScript can change the final positioning. I want positions that are as close as possible to what the user will see.
I've looked into the Chromium code and think there is a way to do this, but there is not enough documentation to get started.
Putting it VERY simply, I'd be interested in pseudo-code like this:
DOMRoot *r = new Page("http://stackoverflow.com")->getDom();
Any tips on starting points?
You should use the Web API wrapper that Chromium exposes; specifically, the WebDocument class contains the functionality that you need. You can call it like this:
WebFrame* mainFrame = webView->mainFrame();
WebDocument document = mainFrame->document();
WebElement docElement = document.documentElement(); // WebDocument is returned by value, so use '.' here
// Manipulate the DOM here using docElement
...
You can browse the source code for Chromium's Web API wrapper here. Although there's not much in the way of documentation, the header files are fairly well-commented and you can browse Chrome's source code to see the API in action.
It's difficult to get started using Chromium. I recommend looking at the test_shell application. Also, a framework like the Chromium Embedded Framework (CEF) simplifies the process of embedding Chromium in your application; I use CEF in my current project and I'm very satisfied with it.