I'm using Selenium (Python) to automate some tests on websites. Because Selenium's API is quite limited, I'm using a web extension to perform advanced JavaScript tests.
What would be the proper way to communicate results from the web extension back to the Python script? So far, I'm passing them through console.log messages, but that fails if the target site overrides console.log() (and it seems quite hacky anyway).
I'd probably tackle this the following way:
Firstly, if you have control over the web extension's source code, then I'd add a simple method which serializes your data into a nice format, then stores it into the browser's local storage.
Note: if you haven't worked with this, don't worry, there are plenty of examples online. Keep in mind that you're also limited to roughly 5-10 MB of local storage for your data (the exact limit varies across browsers).
Secondly, you'd have to read the localStorage values. I see two ways to do this:
make use of your underlying Selenium-based framework's API (usually all of them expose some sort of localStorage/cookies call). For example, in most frameworks you can use the .execute() (or executeScript) command to set or read a local storage value: browser.execute("localStorage.setItem('PerduData', <dataObject>);") - see the sketch after this list
use plain JavaScript inside your scripts to set the same local storage value
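For example, a minimal sketch of the two halves; the key name 'testResults' and the result object are purely illustrative:

```javascript
// Extension side: serialize the results and drop them into the page's localStorage.
localStorage.setItem('testResults', JSON.stringify({ passed: 12, failed: 1 }));

// Test side: this is the snippet you would pass to .execute() / execute_script();
// it reads the same key back and returns the parsed object to your script.
return JSON.parse(localStorage.getItem('testResults'));
```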
I'm sure there are multiple ways to achieve the same outcome. If you're comfortable working with files, you can also consider storing the data object in other browser or machine (OS) storage areas.
Lastly, I think the most elegant way to achieve this is to use some third-party storage with a publicly exposed API that can ultimately be accessed by both the web extension and the Selenium script.
Hope this helps!
My aim is to select some text from a web page, trigger a Google Chrome extension, and pass the text to a Google Cloud API (the Natural Language API, in my case).
I want to do some sentiment analysis and then get the result back to mark/highlight positive sentences in green and negative ones in red.
I am new to this and do not know how to start.
The extension consists of a manifest, popup, etc. How should I call an API from there that does Natural Language Processing?
Should I create a Google Cloud application with an API_KEY to call? In that case I would have to upload my credentials, right?
Sorry, I know this sounds a bit confusing, but I just don't know how to bring these two things together and would be more than happy about any help.
The best way to authenticate will depend on the specific needs and use cases of your application. You can see an overview of all the different methods here.
If you are not planning on identifying users or on using a back-end server that handles authentication (as I assume to be your case), the best option would indeed be to use API keys. They do not identify the user, but they are enough for the Natural Language APIs.
To do this you will need to create an API key for the services you want and add the necessary restrictions to make the key as secure as possible. Detailed instructions on how to do this and how to use the key in a url can be found here.
The API call could be made from within the Chrome extension with any JavaScript method capable of performing POST requests, for example XMLHttpRequest or the Fetch API. You can find an example of the parameters that need to be included in the request here.
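As a rough sketch (YOUR_API_KEY is a placeholder, and the response handling is simplified), a Fetch-based call to the documents:analyzeSentiment endpoint could look something like this:

```javascript
// Send the selected text to the Natural Language API and return the per-sentence sentiment.
function analyzeSentiment(text) {
  return fetch('https://language.googleapis.com/v1/documents:analyzeSentiment?key=YOUR_API_KEY', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      document: { type: 'PLAIN_TEXT', content: text },
      encodingType: 'UTF8'
    })
  })
    .then(function (response) { return response.json(); })
    .then(function (result) {
      // Each entry in result.sentences has its own sentiment.score,
      // which you can use to choose green or red highlighting.
      return result.sentences;
    });
}
```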
You may run into CORS issues when making the request directly from the extension. I recommend reading this answer, where a couple of workarounds for these issues are suggested.
For one of our clients we are building a web application with Oracle ADF.
One of the requested features of this application is having a drag-and-drop file upload.
Fortunately the af:inputFile component supports this feature out of the box.
Unfortunately that feature is not supported in Internet Explorer 11, which we absolutely have to support.
Now I have been trying to get it to work using the dropzone.js library, and the drag-and-drop functionality seems to be working, but I haven't been able to get the POST request to the ADF side of things quite right.
Even if I did, it would be a lot of custom code that would have to be maintained. If it's the only way to make it work, that's fine, but if there is a more elegant solution I would like to know about it.
What you can do is use the ADF JavaScript APIs, more specifically AdfFileUploadManager (https://docs.oracle.com/middleware/1213/adf/api-reference-javascript-faces/oracle/adf/view/js/util/AdfFileUploadManager.html).
You need to instantiate it by giving it the ADF component reference, which can be an af:inputFile with display="none".
Then you can wire in your Dropzone (or any other drag-and-drop functionality) and use addFileToQueue to send the file to the server side, where it will be converted to an UploadedFile; a rough sketch follows.
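A rough sketch only, not something I have tested: the component id ('hiddenInputFile'), the drop-area selector, and the exact constructor/lookup calls are assumptions that should be verified against the AdfFileUploadManager API docs linked above.

```javascript
// Rough sketch: wiring Dropzone to a hidden af:inputFile via AdfFileUploadManager.
// 'hiddenInputFile' is the (assumed) id of the af:inputFile with display="none".
var inputFileComponent = AdfPage.PAGE.findComponentByAbsoluteId('hiddenInputFile');
var uploadManager = new AdfFileUploadManager(inputFileComponent);

// Let Dropzone handle the drag-and-drop UI, but hand every dropped file to ADF
// instead of letting Dropzone post it itself.
var dropzone = new Dropzone('#dropArea', { url: '/unused', autoProcessQueue: false });
dropzone.on('addedfile', function (file) {
  // addFileToQueue pushes the file to the server side, where it arrives as an UploadedFile.
  uploadManager.addFileToQueue(file);
});
```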
I was wondering how services like http://www.inspectlet.com/ store the video sessions. By the looks of it, I don't think it's a WebRTC implementation. What I was able to figure out is that there is an active Express socket doing the communication, but in that case they would have to store the page and track all the events from the DOM. I just wanted to confirm that this is the approach they are following.
Looking at the event listeners on the page, it looks like there are a lot of bindings. For example, the <body> has scroll, keyup, and change events bound to a function. I'm sure it also has mousemove, mouseclick, etc. All of this is likely stored in a JavaScript variable (an object, probably) and then AJAXed every so often to http://hn.inspectlet.com/adddata with the data parameters. Here is a sample of what is being sent:
http://hn.inspectlet.com/adddata?d=mr,212941,46,337,46,1277)mr,213248,163,498,163,1438)mr,213560,144,567,144,1507)mr,213873,138,240,138,1180)mr,214188,169,184,169,1124)mr,214504,158,520,158,1460)mr,214816,231,487,231,1427)mr,215130,329,197,329,1137)mr,215444,894,289,894,1229)mr,215755,903,295,903,735)s,215755,440,0)&w=259769975&r=494850609&sd=1707&sid=1660474937&pad=3&dn=dn&fadd=false&oid=99731212&lpt=212629
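A stripped-down sketch of what such a collector might look like; the buffering interval and the payload encoding below are assumptions, not Inspectlet's actual format:

```javascript
// Buffer mouse and scroll events, then periodically flush them to the
// collection endpoint as a compact, comma/parenthesis-separated string.
var buffer = [];

document.addEventListener('mousemove', function (e) {
  buffer.push(['mr', Date.now(), e.pageX, e.pageY]);
});

document.addEventListener('scroll', function () {
  buffer.push(['s', Date.now(), window.scrollY]);
});

setInterval(function () {
  if (buffer.length === 0) return;
  var payload = buffer.map(function (ev) { return ev.join(','); }).join(')');
  buffer = [];
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'http://hn.inspectlet.com/adddata?d=' + encodeURIComponent(payload));
  xhr.send();
}, 3000);
```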
As suggested in Adam's answer, they track many events in the page and send them either via a websocket or post/get request (XHR) to their servers.
I am not sure what inspectlet does specifically, but in general, such a solution will need to follow the below general steps:
When the page is fully loaded (probably by hooking DOMContentLoaded) they will send the page data to the server. Then they also hook up a MutationObserver in order to track all changes to the DOM in the page, so when something changes dynamically, the tracking script can 'record' it and send it to the server.
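A minimal sketch of that recording side; sendToServer is a placeholder for whatever transport is used (websocket, XHR, etc.):

```javascript
document.addEventListener('DOMContentLoaded', function () {
  // Initial snapshot of the page, sent once.
  sendToServer({ type: 'snapshot', html: document.documentElement.outerHTML });

  // Stream every subsequent DOM change to the server.
  var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (m) {
      sendToServer({
        type: 'mutation',
        kind: m.type, // 'childList', 'attributes' or 'characterData'
        target: m.target.nodeName,
        added: m.addedNodes.length,
        removed: m.removedNodes.length
      });
    });
  });

  observer.observe(document.documentElement, {
    childList: true,
    attributes: true,
    characterData: true,
    subtree: true
  });
});
```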
The SaaS application, in turn, will have a player that parses all this raw data and plays it back. This will usually require using some virtual file system for assets (images, CSS, scripts) and handling JS scripts so they don't post back again (replaying XHRs would have bad results for the tracked websites), while playing back the mutations exactly as they were captured (recorded).
As for the data, HTML pages compress really well, especially when you can make assumptions about the data (since you know it's HTML) - so yes, they actually cache a lot (similar to what Google does in that regard, but Google does it for the entire web, not just 'customers').
The DOM mutations will need to be stored as well. This is up to the algorithm: they can either be stored as plain text or using a smarter data model, though storing them as plain text is obviously not the cost-effective solution.
The above is an abstraction; there are many edge cases to handle in order to implement such a solution, as well as a lot of mathematical and algorithmic thinking regarding heatmaps to make them accurate.
After a long search I was able to find a promising new solution on the block, which solves all the complex parts of building such a service. It is still under development, but it solves the problem. Below are the links to it:
https://www.rrweb.io/
https://github.com/rrweb-io/rrweb
I am planning to use node-webkit for porting my existing HTML/CSS/JavaScript web app to a native desktop app. Before doing this, I was trying to see if there are any downsides of using node-webkit.
Which is the best database supported by node-webkit?
My understanding is that a node-webkit app does not require any browser to run, since it provides the WebKit engine itself, and the app provides a UI for it using HTML5 and CSS. Is this understanding of mine correct?
All your pointers will be helpful.
Thanks!!!
Yes, that is correct. node-webkit works as an HTML5/Node.js application wrapped in a simple browser app built on the Chromium engine, and it doesn't need anything else installed to work.
As far as I understand, you want to connect to a remote database, not create a local one for user data. If that's true, you shouldn't implement it on the client side but on the server side, which means your server-side implementation shouldn't differ from your current one.
I create some applications with the following structure:
- folder : server (for any php/db request or response)
- folder : client (for js/css and images)
The relation between the client and the server is AJAX.
As for the best database: for local use I prefer SQLite (on a network, MongoDB or MySQL).
In all cases it's preferable to use an ORM like Doctrine.
Also make sure that the server response is always JSON (not formatted divs or any HTML); the client must be able to organise its data itself.
As a sample: OpenERP uses this structure (with Python instead of PHP).
For using SQLite, here is a link that explains the approach:
http://tejasrpatel.wordpress.com/2011/12/29/create-sqlite-off-line-database-and-insertupdatedeletedrop-operations-in-sqlite-using-jquery-html5-inputs/
That's the advice from my experience, and I hope this is helpful for someone.
I am currently developing a node-webkit app myself as well. I had this same question, so as I was looking around I found PouchDB. It looks very promising for a node-webkit environment, so that's what I'm going to be using. I hope this helps you out as well.
Actually there is one major part of node-webkit that you don't mention in your question, that is the 'node' part, specifically node.js. This is important because just about anything you can do in node.js is available to you in node-webkit.
I don't know what your application does, so I can't tell for sure, but you may or may not need an actual database. If all you want to do is store some data, you may find a file to be sufficient (JSON or whatever format is convenient for your purpose) which is easy to do with the fs module. Or you might only need to use localStorage, which is also available in the 'webkit' part of node-webkit.
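For example, a minimal sketch of file-based persistence with the fs module (the file name and fields are purely illustrative):

```javascript
// Store and load app data as a JSON file using node-webkit's built-in fs module.
var fs = require('fs');

function saveSettings(settings) {
  fs.writeFileSync('settings.json', JSON.stringify(settings, null, 2));
}

function loadSettings() {
  try {
    return JSON.parse(fs.readFileSync('settings.json', 'utf8'));
  } catch (e) {
    // First run, or the file is missing/corrupt: fall back to defaults.
    return {};
  }
}
```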
If you do actually need a database, then anything that works with node.js should be available to you, such as the aforementioned PouchDB, or any number of other possibilities.
In any case, you don't need to set it up as a client-server model if you don't want to; you can just access files or databases directly in your node.js code. Conversely, if you do want a client-server model, you can have both running locally within your node-webkit program.
Hope that helps.
I use node-webkit instead of Qt, PySide, etc., and I'll tell you why:
WebKit (the most full-featured browser engine)
node.js (what can I say, it's JavaScript)
cross-platform (almost all platforms)
To package and distribute the software I use Web2Executable.
For the GUI I use ExtJS 4.2 from Sencha.
For the database (engine) I use NeDB, but you can also use the internal engines from WebKit, described here: persistent-data-in-app. A minimal sketch of NeDB usage follows.
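A minimal NeDB sketch (the file name and document fields are purely illustrative):

```javascript
// NeDB: an embedded, MongoDB-like datastore that persists to a local file.
var Datastore = require('nedb');
var db = new Datastore({ filename: 'app.db', autoload: true });

// Insert a document, then query it back.
db.insert({ name: 'example', done: false }, function (err, newDoc) {
  db.find({ done: false }, function (err, docs) {
    console.log(docs); // [{ name: 'example', done: false, _id: '...' }]
  });
});
```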
I am somewhat familiar with benchmarking/stress-testing traditional web applications, and I find it relatively easy to start estimating the maximum load for them. With the tools I am familiar with (Apache ab, Apache JMeter) I can get a rough estimate of the number of requests/second a server with a standard app can handle. I can come up with a user story, create a list of pages I would like to check, and benchmark them separately. A lot of information can be found on the internet about how to go from a novice like me to a master.
But in my opinion a lot of things are different when benchmarking a single-page application. The main entry point is the most expensive request, because the user loads the majority of things needed for a proper app experience (or at least it is this way in my app). After that, navigation to other places is just an AJAX request, waiting for JSON, and templating, so time to window load is not important anymore.
To add to the problem, I was not able to find any resources on how people do this properly.
In my particular case I have an SPA written with Knockout, sitting on an Apache server (most probably this is irrelevant). I would like a rough estimate of how many users my app can handle on a particular server. I am not looking for a tool recommendation (although that would be nice too); I am looking for an experienced person to share their insight about the benchmarking process.
I suggest you test this application just like you would test any other web application, as you said - identify the common use cases, prepare scripts for them, run them in the appropriate mix, and analyze the results.
Web applications can break in many different ways for different reasons. You are speculating that the first page load is heavy and the rest is just small AJAX. From experience I can tell you that this is sometimes misleading - for example, you may find that the heavy page comes from cache and the server is not working hard for it, while a small AJAX response requires a lot of computing power, a long-running database query, or hits some locking in the code that causes it to break or be slow under load - that's why we do load testing.
You can do this with any load testing tool, ideally one that can handle those types of script with many dynamic values. My personal preference is WebLOAD by RadView
I am dealing with a similar scenario: an SPA where the first page loads and thereafter everything is done by requesting other HTML pages and/or making web service calls to get the data.
My goal is to stress test the web server and DB server.
My solution is to just create requests for those HTML pages (a very low performance concern, IMO, since they are static and can be cached for a very long time in the browser) and for the web service calls. The biggest load will come from the requests for data/processing via the web service calls.
Capture all the HTML page and web service requests using a tool like Fiddler, and use any load testing tool (like JMeter) to replay these requests with as many virtual users as you want to test your application with.