Communication Between WebView and WebPage - Titanium Studio

I am working on a mobile project (using Titanium Studio) in which I have the following situation:
1) My mobile app contacts a Rails backend to check some data, say the validity of a user id.
2) I found a way to load web pages in the mobile app, i.e., a WebView.
3) I am able to load the desired URL, e.g. http://www.mydomain.com/checkuser?uid=20121, which returns data like status:success
But I need to read this data to show whether the response from the server is a success or a failure. How do I achieve this?
NOTE: The above scenario is a simplified use case. What actually happens is that I load a third-party URL in the WebView, and when the user enters data and submits, the result is posted back to my website's URL.
EDIT: So the process is as follows:
1) The WebView is loaded with a third-party URL like http://www.anyapiprovider.com/processdata
2) The user enters a set of data in this web page and submits it.
3) The submitted data is processed by the API provider, which returns data to my web page, say http://www.mydomain.com/recievedata
This is the reason why I am not simply doing a GET with an HTTPClient.
FYI: I tried to fire Ti.App events right from the actual web page as suggested by a few articles, but most of them say this works only if the loaded file is local, not remote. Reference Link
Please suggest how my approach could be improved.
Thanks

If you don't want to follow Josiah's advice, then take a look at the Titanium docs on how to add a webView.addEventListener('load', ...) event listener and use webView.evalJS() to inject your own code into the third-party HTML.
Maybe you can inject code that traps the submit event and fires a Ti event to trigger downloading the data from your website.
Communication Between WebViews and Titanium - Remote Web Content Section
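For illustration, here is a minimal sketch of the load-listener half of that suggestion (the e.url check is illustrative; evalJS runs synchronously and returns a string):
webView.addEventListener('load', function (e) {
    // e.url is the address of the page that just finished loading
    if (e.url.indexOf('mydomain.com/recievedata') !== -1) {
        // back on our own page; read its contents with evalJS
        var status = webView.evalJS("document.body.innerHTML");
        Ti.API.info('server said: ' + status);
    }
});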

I found a solution to my problem:
1) Load http://www.mydomain.com/checkuser?uid=20121 in a WebView.
2) Let the user enter and submit data to the third-party URL.
3) Receive the response from the third-party URL and print only <div id="result">status:success</div> on the http://www.mydomain.com/recievedata page.
4) Add an event listener to the WebView as follows:
webView.addEventListener('load', function(e) {
    // Only the /recievedata page renders a div with id="result", so an
    // empty string means the WebView is still on the third-party page
    var result = webView.evalJS("var el = document.getElementById('result'); el ? el.innerHTML : '';");
    if (result) {
        alert(result);
    }
});
The above alert prints the result status:success; read it in the WebView's load event and take action in the app accordingly.
It works fine for me.

Instead of loading it in a WebView, why not just GET it using an HTTPClient? This is much cleaner and more standards-based:
var xhr_get = Ti.Network.createHTTPClient({
    onload : function(e) {
        // Here is your "status:success" string
        var returnValue = this.responseText;
    },
    onerror : function(e) {
        Ti.API.info(this.responseText);
        Ti.API.info('CheckUserProgressOnActivity webservice failed with message : ' + e.error);
    }
});
xhr_get.open('GET', 'http://www.mydomain.com/checkuser?uid=20121');
xhr_get.send();

Related

node express multer fast-csv pug file upload

I am trying to upload a file using pug, multer and express.
The pug form looks like this:
form(method='POST' enctype="multipart/form-data")
  div.form-group
    input#uploaddata.form-control(type='file', name='uploaddata')
    br
  button.btn.btn-primary(type='submit' name='uploaddata') Upload
The server code looks like this (taken out of context):
.post('/uploaddata', function(req, res, next) {
    // multer's upload.single() returns middleware, so it is invoked
    // manually with (req, res, callback)
    upload.single('uploaddata')(req, res, function(err) {
        if (err) {
            throw err;
        } else {
            res.json({success : "File uploaded successfully.", status : 200});
        }
    });
})
My issue is that while the file uploads successfully, the success message is not shown on the same page; instead, a new page is loaded showing
{success : "File uploaded successfully.", status : 200}
As an example, for other elements (link clicks) the message is displayed via javascript like this:
$("#importdata").on('click', function(){
    $.get("/import", function(data) {
        $("#message").show().html(data['success']);
    });
});
I tried doing it in pure javascript to work around the default form behaviour, but had no luck.
Your issue has to do with mixing form submissions and AJAX concepts. To be specific, you are submitting a form and then returning a value appropriate to an AJAX API. You need to choose one or the other for this to work properly.
If you choose to submit this as a form, you can't use res.json; you need to switch to res.render or res.redirect instead to render the page again. You are seeing exactly what you are telling node/express to do with res.json: JSON output. Rendering or redirecting is what you want to do here.
Here is the MDN primer on forms and also a tutorial specific to express.js.
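For example, here is a minimal sketch of the form-submission variant (the 'upload' view name and the message variable are illustrative):
.post('/uploaddata', function(req, res, next) {
    upload.single('uploaddata')(req, res, function(err) {
        if (err) return next(err);
        // re-render the page (or res.redirect) instead of returning JSON,
        // so the browser shows a full page containing the message
        res.render('upload', { message: 'File uploaded successfully.' });
    });
})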
Alternatively, if you choose to handle this with an AJAX API, you need to use jquery, fetch, axios, or similar in the browser to send the request and handle the response. This won't cause the page to reload, but you do need to handle the response somehow and modify the page, otherwise the user will just sit there wondering what has happened.
MDN has a great primer on AJAX that will help you get started there. If you are going down this path also make sure you read up on restful API design.
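If you go the AJAX route, a sketch in the style of your existing link-click handler might look like this (processData and contentType are disabled so the browser builds the multipart body itself):
$('form').on('submit', function(e) {
    e.preventDefault(); // keep the browser on the current page
    $.ajax({
        url: '/uploaddata',
        type: 'POST',
        data: new FormData(this),
        processData: false, // don't let jQuery serialize the FormData
        contentType: false  // let the browser set the multipart boundary
    }).done(function(data) {
        $('#message').show().html(data.success);
    });
});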
Neither one is inherently a better strategy, both methods are used in large-scale production applications. However, you do need to choose one or the other and not mix them as you have above.

How to communicate PHP with PhantomJS for solving Captchas

The problem I'd like to solve is the following:
I need to manage an external web page through PHP, for example, log in and then change the profile info on the external site after an ajax request is sent from my own site.
For this, I'm calling PhantomJS from PHP to do those tasks, but before logging in to the external site I need to fill in the captcha input. So I'd like to send the captcha image back to my site, let the user type in the correct code, and send it back to PhantomJS's WebPage module to log in with that code.
In other words, I need a 'synchronous' flow like this:
1) PHP -> Send a request to log in and obtain the captcha image.
2) PhantomJS -> Open a WebPage instance and render the captcha code to an image.
3) PHP -> Get the captcha image, show it to a user and send the text input to PhantomJS.
4) PhantomJS -> Get the text code from PHP, fill in the captcha input using 'page.evaluate' and log in. Send PHP some data ('Login successful', 'Login failed', etc.)
5) PHP -> Get the callback and send another task or data:
callback = 'Login successful' --> Change profile picture or update user info.
callback = 'Login failed' --> Try to log in again (as in point 1).
Etc...
There are many things I don't know how to handle. For example:
1) How can I keep the WebPage module open and waiting for the text code of the captcha? If I close it, a new captcha code will appear next time, so I need a way to wait for the code and receive it. Do I need to start a server for this?
2) Getting the captcha image from PHP isn't a problem (thanks to 'page.render'), but how can I send text back to the WebPage instance of PhantomJS? I think it's best to send data bidirectionally between both systems. Again, do I need a server?
I think I need a socket server in PhantomJS (how can this be done?). This server would hold the WebPage instance that I need to keep open, but I'm not completely sure about this.
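For what it's worth, a minimal sketch of that idea using PhantomJS's built-in 'webserver' module (the port, the /solve path and the element ids are all illustrative) could look like this:
var page = require('webpage').create();
var server = require('webserver').create();

page.open('http://www.example.com/login', function (status) {
    // once the login page is up, render the captcha for PHP to pick up
    page.render('/tmp/captcha.png');
});

// PHP can POST the solved captcha text to this port
server.listen(8585, function (request, response) {
    if (request.url === '/solve' && request.post) {
        var code = request.post.code; // urlencoded POST field sent by PHP
        page.evaluate(function (c) {
            document.getElementById('captchaInput').value = c;
            document.getElementById('login').click();
        }, code);
        response.statusCode = 200;
        response.write('ok');
    } else {
        response.statusCode = 404;
        response.write('not found');
    }
    response.close();
});
The page object lives as long as the PhantomJS process does, so the same captcha session stays open between the two PHP requests.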
Thanks.
I recently published a project that gives PHP access to a browser. Get it here: https://github.com/merlinthemagic/MTS. Under the hood is an instance of PhantomJS.
The main issue is keeping a resource alive after the initial execution. Here is how I propose you do it.
After downloading and setting up, you would simply use the following code:
Start of "Setup" session:
if (isset($_POST['sessionUID']) === false) {
    //set the execution timeout long enough to cover the entire process (setup
    //and working time); it dictates when phantomJS shuts down automatically
    ini_set('max_execution_time', 300);

    //open the login page:
    $myUrl = "http://www.example.com";
    $browserObj = \MTS\Factories::getDevices()->getLocalHost()->getBrowser('phantomjs');

    //allow the page to live after php shuts down
    $browserObj->setKeepalive(true);
    $windowObj = $browserObj->getNewWindow($myUrl);

    //find the username input box, here it has id=username
    $windowObj->mouseEventOnElement("[id=username]", 'leftclick');
    //type your username
    $windowObj->sendKeyPresses("yourUsername");

    //find the password input box, here it has id=passwd
    $windowObj->mouseEventOnElement("[id=passwd]", 'leftclick');
    //type your password
    $windowObj->sendKeyPresses("yourPassword");

    //click on the login button, here it has id=login
    $windowObj->mouseEventOnElement("[id=login]", 'leftclick');

    //I assume this is when you encounter the CAPTCHA image
    //find the CAPTCHA image element, here it has id=captchaImage
    $element = $windowObj->getElement("[id=captchaImage]");
    $loc = $element['location'];

    //tell the screenshot to only capture the CAPTCHA image
    $windowObj->setRasterSize($loc['top'], $loc['left'], ($loc['right'] - $loc['left']), ($loc['bottom'] - $loc['top']));
    $imageData = $windowObj->screenshot("png");

    $sessionUID = uniqid();
    //save the window object so we can pick it up again
    file_put_contents("/tmp/" . $sessionUID, serialize($windowObj));
}
//now render the CAPTCHA image to the user as part of a form they can submit, and make sure to keep the $sessionUID as a hidden variable in the form on the page
End of the "Setup" session, php shuts down here.
Start of "Working" session:
We assume the user submits the form via a POST containing the $sessionUID and the text string for the CAPTCHA.
if (isset($_POST['sessionUID']) === true && isset($_POST['captchaTxt']) === true) {
    //the original snippet never assigned $sessionUID in this branch
    $sessionUID = $_POST['sessionUID'];
    $savedWindow = file_get_contents("/tmp/" . $sessionUID);
    //delete the saved object
    unlink("/tmp/" . $sessionUID);

    //bring the object back to life
    $windowObj = unserialize($savedWindow);
    //make sure the browser is now shut down on exit
    $windowObj->getBrowser()->setKeepalive(false);

    //find the CAPTCHA input box, here it has id=captchaInput
    $windowObj->mouseEventOnElement("[id=captchaInput]", 'leftclick');
    //type the CAPTCHA string
    $windowObj->sendKeyPresses($_POST['captchaTxt']);

    //click on the button to accept the CAPTCHA, here it has id=captchaOK
    $windowObj->mouseEventOnElement("[id=captchaOK]", 'leftclick');

    //now use the clickElement() etc. functions on $windowObj to do what you need to do
}
End of the "Working" session, php shuts down here.

SignalR inside _Layout.cshtml - persist connection

I decided to use SignalR for the chat on my page. The chat page is opened when the user clicks an "Enter Chat" link placed inside _Layout.cshtml. This works fine. However, what I would like to achieve is the following functionality:
On the left side of the page I would like to have some kind of "online users" area; when one user logs in, users who are already logged in will be able to see that a new user just entered the page.
Users who are online can chat with each other by simply clicking on their names.
I am using the following code to connect to the chat application:
$(function () {
    // declare a proxy to reference the hub
    var chatHub = $.connection.chatHub;
    registerClientMethods(chatHub);

    // start the hub
    $.connection.hub.start().done(function () {
        registerEvents(chatHub);
        chatHub.server.connect('@User.Identity.Name');
    });
});
However, when I place this code inside my _Layout.cshtml page, users are logged off and reconnected every time they navigate between pages (all pages are rendered inside _Layout.cshtml).
Is there any way to persist the connection with the hub when navigating between pages? What are the best practices for this kind of functionality?
Whenever you navigate away from a page or otherwise refresh its contents, you will need to start a fresh SignalR connection. There are two ways to handle this behavior when navigating through pages:
Create a single page application.
Handle users connecting/disconnecting on the server in such a way that they're not truly logged out until they leave the site.
Now to dive into a little more detail on #2. Users on your site may disconnect/connect every time they transition to a new page, but you can control how they log out or appear disconnected via your server-side logic. You can get this functionality by keeping a set of "online" users in your server-side code and only considering them offline after a specified timeout value.
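To make that concrete, here is a minimal sketch of the delayed-offline bookkeeping in plain JavaScript (in a real app this logic would sit behind your hub's OnConnected/OnDisconnected handlers, and the 30-second grace period is an arbitrary choice):
var online = {};            // user name -> pending disconnect timer
var TIMEOUT_MS = 30 * 1000; // grace period while the user navigates

function userConnected(name) {
    if (online[name]) {
        clearTimeout(online[name]); // came back before the grace period ended
    }
    online[name] = null; // connected, nothing pending
}

function userDisconnected(name, broadcastOffline) {
    // don't mark the user offline immediately; this may just be a page change
    online[name] = setTimeout(function () {
        delete online[name];
        broadcastOffline(name); // now they have really left the site
    }, TIMEOUT_MS);
}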

facebook dialog to add facebook tab app - returns - tabs_added[pageId]=1

It seems the new dialog to add a tab app to a page -> https://developers.facebook.com/docs/reference/dialogs/add_to_page/ - calls the app URL with a GET (redirect_uri?tabs_added[nnnnn]=1) (where nnnnn is the pageId of the page the app is being added to).
I can't find documentation on whether, when the app is removed from the page, the same URL will be called with a GET (redirect_uri?tabs_added[nnnnn]=0).
I am keen to process the uninstall of the app from the page, if possible. (I have tried to test this, but don't get a trigger to my redirect_uri upon an uninstall, unlike the one that is called upon an install.)
My question is whether there is a way to get a page-delete callback into the app (when a page uninstalls/deletes the app from the page). From the syntax of the install GET call (?tabs_added[nnn]=1), it seems this might have been designed with the intention of calling a GET with ?tabs_removed[nnnn]=1 or ?tabs_added[nnnn]=0 when the app is deleted from the page.
Empirically, the answer to your question is No. Nothing on my server gets called by Facebook when the Page Tab is removed.
Go to the Advanced tab in the Facebook app settings and put a URL of your choice into the 'Deauthorize Callback URL' field. You will receive a callback on that URL, and you need to parse the signed request.
Example in PHP:
$helper = $fb->getPageTabHelper();
$signedRequest = $helper->getSignedRequest();
if ($signedRequest) {
    $payload = $signedRequest->getPayload();
    //trace(print_r($payload, true));
    $pageId = $payload['profile_id'];
    //You can now update your records using $pageId
}
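If you are not using the PHP SDK, the signed request can also be decoded by hand: it is two base64url-encoded parts (an HMAC-SHA256 signature and a JSON payload) joined by a dot, signed with your app secret. A sketch in Node.js, with illustrative names:
var crypto = require('crypto');

function parseSignedRequest(signedRequest, appSecret) {
    var parts = signedRequest.split('.');
    var b64 = function (s) {
        return Buffer.from(s.replace(/-/g, '+').replace(/_/g, '/'), 'base64');
    };
    var sig = b64(parts[0]);
    var payload = JSON.parse(b64(parts[1]).toString('utf8'));
    // the signature covers the still-encoded payload string
    var expected = crypto.createHmac('sha256', appSecret).update(parts[1]).digest();
    if (!sig.equals(expected)) {
        throw new Error('bad signed_request signature');
    }
    return payload; // e.g. payload.profile_id, as in the PHP example above
}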

Prevent offline iphone webapp from opening link in Safari

I'm developing a website that will work with mobile Safari in offline mode. I'm able to bookmark it to the home screen and load it from there. But once opened from the home screen, clicking on certain links jumps out of the app and opens in mobile Safari, despite the fact that I call preventDefault() on all link clicks!
The app binds an onclick event handler at the <body> level. Using event delegation, it catches any click on any link, looks at its href (e.g. 'help' or 'review'), and dynamically calls a javascript template and updates the page. The event handler calls preventDefault() on the event object; for some of the links this works, and the page is updated with the template output. However, the links that trigger a hit against the local database before rendering the template output open in mobile Safari.
In desktop Safari, all the links work even when I'm offline, so something mobile-Safari-specific is going on.
Any thoughts on why some links would work offline, but not others? None of the link URLs in question are listed in the manifest file, but they shouldn't need to be, since the link action is prevented.
A couple of extra oddities:
* Once I click on a link that loads in mobile Safari, even if I'm offline, those same links then work, and the templates populated with data from the db work properly. In other words: the links fail when opened from the home screen, but not from within mobile Safari offline.
* Changing the link to remove the database hit (populating the template with a mock db result) solves the problem, and the links can be clicked in the app from the home screen.
You may want to take a look at this: https://gist.github.com/1042026
// by https://github.com/irae
(function(document, navigator, standalone) {
    // prevents links from apps from opening in mobile safari
    // this javascript must be the first script in your <head>
    if ((standalone in navigator) && navigator[standalone]) {
        var curnode, location = document.location, stop = /^(a|html)$/i;
        document.addEventListener('click', function(e) {
            curnode = e.target;
            while (!(stop).test(curnode.nodeName)) {
                curnode = curnode.parentNode;
            }
            // Conditions to do this only on links to your own app
            // if you want all links, use if ('href' in curnode) instead.
            if ('href' in curnode && (curnode.href.indexOf('http') || ~curnode.href.indexOf(location.host))) {
                e.preventDefault();
                location.href = curnode.href;
            }
        }, false);
    }
})(document, window.navigator, 'standalone');
I got it to work; the problem was due to an unseen error in the event handler code (unrelated to stopping the link from being followed). If you bind an event handler for click events to the body tag and call preventDefault(), the link will not be followed and mobile Safari will not open, and you can define your own logic for updating the page based on that link URL.
You should be sure to call preventDefault() before any errors could possibly occur. The problem in my case was that an error occurred in the event handler before preventDefault() was called, but of course I couldn't see that error in the console because the link had already been followed.
Here's the code I'm using (it assumes DOM-standard events and would fail in IE):
bodyOnClickHandler = function(e) {
    var target = e.target;
    if (target.tagName == 'A') {
        e.preventDefault();
        var targetUrl = target.getAttribute("href");
        //show the page for targetUrl
    }
};
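To make the ordering explicit, a defensive variant of the same handler could wrap everything after preventDefault() in a try/catch (showPage is a hypothetical page-update helper):
bodyOnClickHandler = function(e) {
    var target = e.target;
    if (target.tagName == 'A') {
        e.preventDefault(); // stop the navigation before anything can throw
        try {
            showPage(target.getAttribute('href'));
        } catch (err) {
            // surface the error without losing the user to mobile Safari
            console.log('link handler failed: ' + err.message);
        }
    }
};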