I'm running e2e tests with PhantomJS and Protractor. However, each time I change my application code (not the e2e test code), PhantomJS doesn't seem to pick up the new changes or clear localStorage and sessionStorage. I think it could be caching, but I'm not sure.
I tried adding the following to my protractor.config file in order to clear localStorage:
onPrepare: function () { browser.executeScript("return window.localStorage.clear();"); }
However, it didn't work; instead I got this error:
    var template = new Error(this.message);
                   ^
UnknownError: {"errorMessage":"SecurityError: DOM Exception 18","request":{"headers":{"Accept-Encoding":"gzip,deflate","Connection":"Keep-Alive","Content-Length":"58","Content-Type":"application/json; charset=utf-8","Host":"localhost:9781","User-Agent":"Apache-HttpClient/4.5.1 (Java/1.7.0_79)"},"httpVersion":"1.1","method":"POST","post":"{\"script\":\"return window.localStorage.clear();\",\"args\":[]}","url":"/execute","urlParsed":{"anchor":"","query":"","file":"execute","directory":"/","path":"/execute","relative":"/execute","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/execute","queryKey":{},"chunks":["execute"]},"urlOriginal":"/session/dc9ce280-e15f-11e5-911a-4b92e3de49f0/execute"}}
Build info: version: '2.51.0', revision: '1af067d', time: '2016-02-05 19:15:17'
Driver info: driver.version: unknown
Like you, I've also seen plenty of advice to call window.localStorage.clear() to clear local storage in PhantomJS. But here's the deal: if you're testing on a remote domain, this code will raise a security exception, because browsers do not (and should not) allow clearing local storage data across domains.
What you can do, however, is control where the local storage data file is written on your testing machine. That's the --local-storage-path command line argument, or in C# (Java is similar):
PhantomJSDriverService service = PhantomJSDriverService.CreateDefaultService();
// IMPORTANT: avoid the common pitfall of trying to specify a file name!
// Provide the path to the folder only.
service.LocalStoragePath = @"path\to\where\I\want\local-storage\saved";
var driver = new PhantomJSDriver(service);
Local storage files are saved with the name [domain].localstorage. Before you run each test, check for the existence of the file *.localstorage in the folder you specified, and delete it. That will clear local storage for you.
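If you're driving PhantomJS from Protractor rather than C#, the same cleanup can be done in the Protractor config itself. Here is a minimal sketch, assuming PhantomJS is given the folder via the phantomjs.cli.args capability, and with a folder name (phantom-storage) that is my choice, not part of the original setup:
// protractor.conf.js (sketch)
var fs = require('fs');
var path = require('path');

// Folder handed to PhantomJS via --local-storage-path (assumed name)
var storageDir = path.join(__dirname, 'phantom-storage');

exports.config = {
  capabilities: {
    browserName: 'phantomjs',
    'phantomjs.cli.args': ['--local-storage-path=' + storageDir]
  },
  onPrepare: function () {
    // Delete every [domain].localstorage file before the tests run
    if (fs.existsSync(storageDir)) {
      fs.readdirSync(storageDir)
        .filter(function (name) { return /\.localstorage$/.test(name); })
        .forEach(function (name) {
          fs.unlinkSync(path.join(storageDir, name));
        });
    }
  }
};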
I am using Selenium with Robot Framework to run an Electron application. The application is built to read a configuration file from the user data directory. As far as I understand, this is the location where this file should be stored.
The electron main process reads the configuration file:
const localConfigFile = path.join(app.getPath('userData'), 'config.json');
const localConfig = fs.existsSync(localConfigFile) ? require(localConfigFile) : {};
The built production version works just fine and reads the file as expected, but when starting it from Robot using SeleniumLibrary, the file is not read. This leads me to believe it's a problem with Robot, Selenium or ChromeDriver.
Robot creates the webdriver using SeleniumLibrary:
Create Webdriver Remote desired_capabilities=${starting_parameters} command_executor=http://127.0.0.1:9515
Where starting parameters are simply:
{ "chromeOptions": {"binary": <binary_location> }}
ChromeDriver is started as a separate process from /usr/bin/chromedriver, where it is installed, and uses the default port 9515.
The versions that I am using are:
ChromeDriver 2.36.540471 (9c759b81a907e70363c6312294d30b6ccccc2752)
"electron": "^6.0.2"
"electron-builder": "^21.2.0"
robotframework==3.2.1
robotframework-seleniumlibrary==3.3.1
Ubuntu 18.04.4 LTS
I experienced the same problem. Changing ChromeDriver's user-data-dir argument to match Electron's userDataDir finally solved it:
options.add_argument('--user-data-dir=' + str(app_config_path()))
I found the problem by using Spectron and logging the Electron main console. In general, I would advise using Spectron instead of Robot to test Electron apps.
The issue is that the user data directory was not the same as the Electron default. When running the application through ChromeDriver, the user data directory is changed to /tmp/somethingsomething, so naturally the file under ~/.config/app-name was not found.
My solution was to use Electron's application data directory instead:
app.getPath('appData')
By default this is one directory up from the user data directory, but remains the same when running through Chromedriver.
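Applied to the snippet from the question, the change might look like this (a sketch; using app.getName() to rebuild the ~/.config/app-name path is my assumption, not the asker's code):
const path = require('path');
const fs = require('fs');
const { app } = require('electron');

// appData survives ChromeDriver's user-data-dir override, whereas
// userData gets redirected to a temporary directory.
const localConfigFile = path.join(app.getPath('appData'), app.getName(), 'config.json');
const localConfig = fs.existsSync(localConfigFile) ? require(localConfigFile) : {};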
I am trying to use testcafe-browser-provider-saucelabs.
My tests can successfully connect to SauceLabs and run there, but testcafe creates a unique sauceconnect tunnel, whereas I need to use a shared tunnel. Also, screenResolution is not being picked up from the sauceLabsConfig.json file.
I have saucelabs credentials set as environment variables.
I am launching tests using these commands:
export SAUCE_JOB="Regression Job"
export SAUCE_BUILD="Build 1"
export SAUCE_CONFIG_PATH="./sauceLabsConfig.json"
testcafe saucelabs:chrome tests/
I created a sauce config JSON file:
{
  "parentTunnel": "PARENT_TUNNEL",
  "tunnelIdentifier": "qa",
  "screenResolution": "1920x1080"
}
Why is my SAUCE_CONFIG_PATH variable not working?
At present, not all SauceLabs options are supported by testcafe-browser-provider-saucelabs. For example, the tunnelIdentifier option is not supported. I've created an issue in the browser provider repository; track it to be informed about progress.
Note that this issue seems to be fixed, per this pull request:
https://github.com/DevExpress/saucelabs-connector/pull/33
...and integrated into testcafe 1.14.0:
https://github.com/DevExpress/testcafe/tree/v1.14.0
There is a javascript library, pre-built and available on npm, that I wish to develop with/debug. In my case, it is openlayers.
In the classic way of including a JavaScript file, one would just switch the script URL from the production version to the debug version, i.e. from:
<script src="dist/ol.js"></script>
to
<script src="dist/ol-debug.js"></script>
However, when using webpack and then importing via npm:
import openlayers from 'openlayers'
you get the production distribution of the library, the same as the ol.js script above.
On a side note, to stop webpack from trying to parse a prebuilt library (and throwing a warning about it), you must include something like this:
// Squash OL whinging
webpackConfig.module.noParse = [
/\/dist\/ol.*\.js/, // openlayers is pre-built
]
Back to the problem at hand: how can I conditionally load a different entry-point for a module prebuilt and imported like this?
Of course, I can do it in a hacky way, by going into node_modules/openlayers/package.json and switching the browser field from
"browser": "dist/ol.js",
to
"browser": "dist/ol-debug.js",
Is there a way I can request a different entry point via webpack or by using a different import syntax? Do I first have to petition the library maintainers to update the browser field to allow different entry-point hints to browsers, according to the spec? https://github.com/defunctzombie/package-browser-field-spec
Any thoughts on a more effective way to make this happen? I'm yearning to be able to programmatically switch between loading the production and debug versions of a library based on environment variables.
Webpack has a configuration option for resolving a module to a different path: https://webpack.github.io/docs/configuration.html#resolve-alias
This resolves the openlayers import to use the debug version:
webpackConfig.resolve.alias = {
  openlayers: 'openlayers/dist/ol-debug.js'
};
In my build system I have a function that takes the environment type and returns the matching webpackConfig. Based on the parameter I include the above snippet or not.
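As a minimal sketch of that idea (the function name, environment values, and surrounding config are assumptions, not the actual build script):
// webpack-multi-config.js (sketch)
function createWebpackConfig(environmentType) {
  var webpackConfig = {
    entry: './src/main.js',
    output: { filename: 'bundle.js' },
    resolve: { alias: {} },
    // openlayers is pre-built, so don't let webpack parse it
    module: { noParse: [/\/dist\/ol.*\.js/] }
  };

  if (environmentType === 'development') {
    // Point the openlayers import at the debug build
    webpackConfig.resolve.alias.openlayers = 'openlayers/dist/ol-debug.js';
  }

  return webpackConfig;
}

module.exports = createWebpackConfig;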
Full code: webpack-multi-config.js
I have two different (gulp-) tasks for development and production. For example the production task: webpackProduction.js
Line 1 imports the config script with production as type.
My build system is based on gulp starter.
The Durandal Test Framework runs Jasmine tests within PhantomJS.
Since I'm implementing this for the first time, I'm getting a lot of errors, and reading through them at the command prompt is proving very tedious.
If I load up the spec.html file in my web browser, it tells me that no specs were found, yet PhantomJS is able to find the specs with no problem.
Is there a way I can configure these Jasmine tests to run through my web browser and not through (or as well as) PhantomJS?
I've set up a new index.html file and have replaced the var runTests = ... section with a simple require() call:
require(['../test/specs/system.spec.js']);
Durandal's system.spec.js file is loaded in the browser, but Jasmine is still stating that no specs were found.
Jasmine's tests weren't being run because I wasn't telling it to re-execute. The solution was simply to re-execute within the require callback:
require(['../test/specs/system.spec.js'], function() {
jasmine.getEnv().execute();
});
Note: a drawback of this is that the 'no specs found' bar is still present, and the 'raise exceptions' control on the re-executed specs doesn't appear to function.
We're evaluating NoFlo for execution on an embedded Linux box using a simple JavaScript engine, i.e. an interpreter (no JIT). In our case, the Node.js engine (with its embedded V8 engine) might be too resource-intensive.
The immediate question is how to run the NoFlo runtime there. After checking out the GitHub repository (https://github.com/noflo/noflo), we generated NoFlo for the browser using grunt build:browser.
As a simple example, to actually try running the generated browser/noflo.js file, I used the d8 shell (the V8 engine shell) as an isolated JavaScript engine outside the Node.js universe, and appended the following code to the generated noflo.js file:
var fbpData = "<some FBP language connections>";
var noflo = require('noflo');
noflo.graph.loadFbp(fbpData, function(graph) {
print("Graph loaded");
});
Then,
d8 noflo.js
on the Linux shell, which reports
rtm.js:9559: TypeError: undefined is not a function
noflo.graph.loadFbp(fbpData, function(graph) {
^
TypeError: undefined is not a function
at rtm.js:9559:13
Without knowing more, this leads me to believe that noflo.js is not self-contained with all the core NoFlo runtime functionality.
What necessary steps am I missing to get the NoFlo library running in an isolated JS engine? (V8 is just an example; it could be any engine that is ECMAScript 5 compliant.)
All code examples on the NoFlo project web site are tailored for Node.js...
PS: As an alternative, I tried to build a browser-based NoFlo from http://noflojs.org/download/, but this always returns a "server error".
Best regards
Gunther Strube
The NoFlo-Gnome project contains a browser build of the noflo-runtime-base repository (https://github.com/noflo/noflo-runtime-base) which itself embeds NoFlo.
You might need to add some aliases because the browser build doesn't necessarily fit your engine: https://github.com/noflo/noflo-gnome/blob/master/src/noflo.js#L89
noflo-gnome runs NoFlo in GJS, which is based on Spidermonkey and GLib/GObject.
It has some minimal require() compatibility which allows pulling in NoFlo. There is a checked-in build of noflo (+ noflo-runtime-base) in ./src/libs, but I did not immediately find how this is created.
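For illustration, a minimal require() shim of that kind might look roughly like this (a sketch of the general technique, not the actual noflo-gnome code; it assumes the bundle registers its modules through a define() hook):
// Minimal CommonJS-style module table for engines without require().
var modules = {};

// A pre-built bundle would call define() to register each module body.
function define(name, factory) {
  modules[name] = { factory: factory, exports: null };
}

function require(name) {
  var mod = modules[name];
  if (!mod) {
    throw new Error('Module not found: ' + name);
  }
  if (mod.exports === null) {
    // Evaluate the module body lazily, on first require()
    mod.exports = {};
    mod.factory(mod.exports, require);
  }
  return mod.exports;
}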
If you're considering using a browser build to speed up the startup time, you might also want to look at: https://github.com/djdeath/noflo-iot
At some point I tried to run NoFlo on a board with very slow I/O. It turned out that a single file compacted build of NoFlo (including all the needed components) was significantly faster.