I am using the following syntax in a Karate feature file and it works, but I want to add it globally in the Karate config file so that I don't have to add it to each of my feature files individually:
* configure proxy = { uri: 'http://xx.xx.xxx.xx:8080', username: 'myuserid', password: 'xxxxxx' }
I need to know how to add the above globally in the karate-config.js file.
Thanks
The karate documentation is quite comprehensive.
If you have any questions, you're pretty likely to find the answer there or in a related demo .feature file.
From the documentation:
And if you need to set some of these 'globally' you can easily do so using the karate object in karate-config.js - for e.g. karate.configure('ssl', true).
So, I would try to put the following snippet in karate-config.js:
function() {
  var config = {
    BASE_URL: 'base url one',
    BASE_URL2: 'base url two'
  };
  karate.configure('proxy', { uri: 'http://xx.xx.xxx.xx:8080', username: 'myuserid', password: 'xxxxxx' });
  return config;
}
Needless to say, you can use the karate.env property to configure your proxy based on your environment.
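For instance, here is a minimal sketch (the 'qa' environment name is hypothetical, and the proxy settings are just the placeholders from your example):
function() {
  var env = karate.env || 'dev';
  var config = {
    BASE_URL: 'base url one'
  };
  if (env === 'qa') {
    // only route traffic through the proxy when running against the 'qa' environment
    karate.configure('proxy', { uri: 'http://xx.xx.xxx.xx:8080', username: 'myuserid', password: 'xxxxxx' });
  }
  return config;
}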
Related
I was trying to find a way to launch all the features of a Karate test suite through Maven, using an external variable to set up the browser (with a local WebDriver or using a Selenium grid).
So something like:
mvn test -Dbrowser=chrome (or firefox, safari, etc)
or using a Selenium grid:
mvn test -Dbrowser=chrome (or firefox, safari, etc) -Dgrid="grid url"
With Cucumber and Java this was quite simple using a singleton for setting up a global webdriver that was then used in all tests. In this way I could run the tests with different local or remote webdrivers.
In Karate I tried different solutions; the last one was to:
define a variable "browser" in the Karate config file
use the variable "browser" in a single feature "X" in which I set up only the Karate driver
call feature "X" from all the other features with callonce, so they reuse that driver
but it didn't work, and to be honest it doesn't seem to me to be the right approach.
Probably being able to set the Karate driver from a JavaScript function inside the features is the right way, but I was not able to find a solution for that.
Another problem I found with Karate is differentiating the behavior between a local and a remote WebDriver, since they are set up in different ways in the feature files.
So, has anyone had the same need, and how did you solve it?
With the suggestions of Peter Thomas, I used this karate-config.js:
function fn() {
// browser settings, if not set it takes chrome
var browser = karate.properties['browser'] || 'chrome';
karate.log('the browser set is: ' + browser + ', default: "chrome"');
// grid flag, if not set it takes false. The grid url is in this format http://localhost:4444/wd/hub
var grid_url = karate.properties['grid_url'] || false;
karate.log('the grid url set is: ' + grid_url + ', default: false');
// configurations.
var config = {
host: 'http://httpstat.us/'
};
if (browser == 'chrome') {
if (!grid_url) {
karate.configure('driver', { type: 'chromedriver', executable: 'chromedriver' });
karate.log("Selected Chrome");
} else {
karate.configure('driver', { type: 'chromedriver', start: false, webDriverUrl: grid_url });
karate.log("Selected Chrome in grid");
}
} else if (browser == 'firefox') {
if (!grid_url) {
karate.configure('driver', { type: 'geckodriver', executable: 'geckodriver' });
karate.log("Selected Firefox");
} else {
karate.configure('driver', { type: 'geckodriver', start: false, webDriverUrl: grid_url });
karate.log("Selected Firefox in grid");
}
}
return config;
}
In this way I was able to call the test suite specifying the browser to use directly from the command line (to be used in a Jenkins pipeline):
mvn clean test -Dbrowser=firefox -Dgrid_url=http://localhost:4444/wd/hub
Here are a couple of principles. Karate is responsible for starting the driver (the equivalent of the Selenium WebDriver). All you need to do is set up the driver config as described here: https://github.com/intuit/karate/tree/master/karate-core#configure-driver
Finally, depending on your environment, just switch the driver config. This can easily be done in karate-config.js actually (globally) instead of in each feature file:
function fn() {
var config = {
baseUrl: 'https://qa.mycompany.com'
};
if (karate.env == 'chrome') {
karate.configure('driver', { type: 'chromedriver', start: false, webDriverUrl: 'http://somehost:9515/wd/hub' });
}
return config;
}
And on the command-line:
mvn test -Dkarate.env=chrome
I suggest you get familiar with Karate's configuration: https://github.com/intuit/karate#configuration - it actually ends up being simpler than typical Java / Maven projects.
Another way is to set variables in the karate-config.js and then use them in feature files.
* configure driver = { type: '#(myVariableFromConfig)' }
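For example, a minimal karate-config.js sketch that could back the line above (the property name 'driver.type' and the default value are just illustrative assumptions):
function fn() {
  var config = {
    // picked up in feature files via '#(myVariableFromConfig)'
    myVariableFromConfig: karate.properties['driver.type'] || 'chromedriver'
  };
  return config;
}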
Keep these principles in mind:
Any driver instances created by a "top level" feature will be available to "called" features
You can even call a "common" feature, create the driver there, and it will be set in the "calling" feature
Any driver created will be closed when the "top level" feature ends
You don't need any other patterns.
EDIT: there are some more details in the documentation: https://github.com/intuit/karate/tree/develop/karate-core#code-reuse
And for parallel execution or re-using a single browser for all tests, refer to: https://stackoverflow.com/a/60387907/143475
I have a Vue.js CLI project working.
It accesses data via AJAX from localhost port 8080 served by Apache.
After I build the project and copy it to a folder served by Apache, it works fine and can access data via AJAX on that server.
However, during development, since the Vue.js CLI site is served by Node.js on a different port (8081), I get a cross-origin (CORS) error, and I want to avoid cross-origin issues in general.
What is a way that I could emulate the data being provided, e.g. some kind of server script within the Vue.js-CLI project that would serve mock data on port 8081 for the AJAX calls during the development process, and thus avoid all cross-site scripting issues?
Addendum
In my config/index.js file, I added a proxyTable:
dev: {
env: require("./dev.env"),
port: 8081,
autoOpenBrowser: true,
assetsSubDirectory: "static",
assetsPublicPath: "/",
proxyTable: {
"/api": {
target: "http://localhost/data.php",
changeOrigin: true
}
},
And now I make my AJAX call like this:
axios({
method: 'post',
url: '/api',
data: {
smartTaskIdCode: 'activityReport',
yearMonth: '2017-09',
pathRewrite: {
"^/api": ""
}
  }
});
But now I see in my JavaScript console:
Error: Request failed with status code 404
Addendum 2
Apparently axios has a problem with rerouting, so I tried it with vue-resource, but this code is showing an error:
var data = {
smartTaskIdCode: 'pageActivityByMonth',
yearMonth: '2017-09'
}
this.$http.post('/api', data).then(response => {
this.pageStatus = 'displaying';
this.activity = response.data['activity'];
console.log(this.activity);
}, response => {
this.pageStatus = 'displaying';
console.log('there was an error');
});
The webpack template has its own documentation, and it has a chapter about API proxying during development:
http://vuejs-templates.github.io/webpack/proxy.html
If you use that, it means that you will request your data from the Node server during development (and the Node server will proxy the request to your real backend), and from the real backend directly in production, so you will have to use different hostnames in each environment.
For that, you can define an env variable in /config/dev.env.js & /config/prod.env.js, as sketched below.
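A minimal sketch of what those files could look like (the API_HOST name and values are hypothetical, adjust them to your project; note the webpack template expects the values to be JSON-stringified):

// config/dev.env.js
module.exports = {
  NODE_ENV: '"development"',
  // in development, requests go to the dev server and are forwarded via proxyTable
  API_HOST: '"/api"'
}

// config/prod.env.js
module.exports = {
  NODE_ENV: '"production"',
  // in production, hit the real backend directly
  API_HOST: '"http://localhost/data.php"'
}

The application code would then call something like axios.post(process.env.API_HOST, data), and webpack substitutes the right value per environment.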
I'm trying to run my first deepstream.io server from this link, but I get this error:
error:
CONNECTION_ERROR | Error: listen EADDRINUSE 127.0.0.1:3003
PLUGIN_ERROR | connectionEndpoint wasn't initialised in time
f:\try\deep\node_modules\deepstream.io\src\utils\dependency-initialiser.js:96
    throw error
    ^
Error: connectionEndpoint wasn't initialised in time
    at DependencyInitialiser._onTimeout (f:\try\deep\node_modules\deepstream.io\src\utils\dependency-initialiser.js:94:17)
    at ontimeout (timers.js:386:14)
    at tryOnTimeout (timers.js:250:5)
    at Timer.listOnTimeout (timers.js:214:5)
and this is my code:
const DeepStreamServer = require("deepstream.io")
const C = DeepStreamServer.constants;
const server = new DeepStreamServer({
host:'localhost',
port:3003
})
server.start();
In deepstream 3.0 we released our HTTP endpoint; by default this runs alongside our websocket endpoint.
Because of this, passing the port option at the root level of the config no longer works (it overrides both the HTTP and websocket port options, and as you can see in the error output provided, both endpoints are trying to start on the same port).
You can override each of these ports as follows:
const deepstream = require('deepstream.io')
const server = new deepstream({
connectionEndpoints: {
http: {
options: {
port: ...
}
},
websocket: {
options: {
port: ...
}
}
}
})
server.start()
Or you can define your config in a file and point to that while initialising deepstream[1].
[1] deepstream server configuration
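As a rough sketch of that file-based approach (assuming the constructor also accepts a string path to a config file, as described in the docs linked above; the conf/config.yml location is only an example):
const DeepStreamServer = require('deepstream.io')

// pass a path to a config file instead of an inline options object
const server = new DeepStreamServer('./conf/config.yml')
server.start()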
One solution that I found is passing an empty config object, so instead of:
const server = new DeepStreamServer({
host:'localhost',
port:3003
})
I'm just using this:
const server = new DeepStreamServer({})
and now everything works well.
All the below is for version 4.2.2 (the latest version as of now).
I was having the same "port in use" or "config file not found" errors. I was also using TypeScript and didn't pay attention to the output dir and the build (which can be a problem when you use TypeScript and build). I was able to run the server in the end, after a fair amount of analysis.
I checked the source code and saw how the config is loaded:
const SUPPORTED_EXTENSIONS = ['.yml', '.yaml', '.json', '.js']
const DEFAULT_CONFIG_DIRS = [
path.join('.', 'conf', 'config'), path.join('..', 'conf', 'config'),
'/etc/deepstream/config', '/usr/local/etc/deepstream/config',
'/usr/local/etc/deepstream/conf/config',
]
DEFAULT_CONFIG_DIRS.push(path.join(process.argv[1], '..', 'conf', 'config'))
DEFAULT_CONFIG_DIRS.push(path.join(process.argv[1], '..', '..', 'conf', 'config'))
I also tested different things. Here is what I came up with:
First of all, if we don't pass any parameter to the constructor, a config from the default directories will be loaded. If there isn't one, the server fails to run.
One of the places where we can put a config is ./conf in the same folder as the server node script.
Secondly, we can pass a config as a string path (parameter to the constructor): config.yml or one of the supported extensions. That makes the server load the server config plus the permission.yml and users.yml configs, which are also expected to be in the same folder. If they are not in the same folder their load will fail, and therefore the permission plugin will not load; the same goes for the users config, and no fallback to defaults will happen.
Thirdly, the supported extensions for the config files are: yml, yaml, json, js.
In a Node.js context, if nothing is specified, there is no fallback to some default config. The config needs to be provided in one of the default folders, by specifying a path to it, or by passing a config object. All the optional options will default to sensible values if not provided (the example a bit below shows that). Know however that specifying a connection endpoint is important and required.
To specify the path, we point at the config.yml file (the server config) [example: path.join(__dirname, './conf/config.yml')]. permission.yml and users.yml will then be retrieved from the same dir (the extension can be any of the supported ones). We cannot pass a path to a directory; that will fail.
We can specify the path to the permission config or the users config separately within config.yml, as shown below:
# Permissioning example with default values for config-based permissioning
permission:
  type: config
  options:
    path: ./permissions.yml
    maxRuleIterations: 3
    cacheEvacuationInterval: 60000
Finally, we can pass an object to configure the server, or pass null and use the .set methods (I didn't test the second method). To configure the server we need to follow the same structure as the yml file, sometimes with slightly different naming. The TypeScript declaration files (types) show us the way; with an editor like VS Code, even without using TypeScript, we still get auto-completion and type definitions.
And the simplest equivalent to the previous version is:
const webSocketServer = new Deepstream({
connectionEndpoints: [
{
type: 'ws-websocket',
options: {
port: 6020,
host: '127.0.0.1',
urlPath: '/deepstream'
}
}
]
});
webSocketServer.start();
The above is the new syntax and approach.
const server = new DeepStreamServer({
host:'localhost',
port:3003
})
The snippet above is completely deprecated and not supported in version 4 (the docs are not updated).
I need some help to parameterize my test suites.
I want to create a JSON file, Suites.json, and define my suites in it:
module.exports = {
Suites:
Smoke: 'File1.spec.js','File2.spec.js',
Main: 'File1.spec.js','File2.spec.js','File3.spec.js'
}
Now I want to use this JSON file in protractor.conf.js.
I imported the JSON file:
var SuiteFile = require('../Suites.json')
Now I am not sure how to use it in my actual conf file.
Should I just say:
suites: SuitesFile
Can someone please confirm?
Yes, it is definitely possible to do something like this.
Please refer to my blog post for more info.
Step 1: Create a js file with suites
module.exports = {
suitesCollection: {
smoke: ['File1.spec.js','File2.spec.js',],
sanity: ['File1.spec.js','File2.spec.js','File3.spec.js'],
demo: ['demo.js']
}
}
Step 2: Import the js file and point the exports.config.suites to use the info from this file
var suitesFile = require('./suites.js');
exports.config = {
  suites: suitesFile.suitesCollection,
  // ...rest of the config
};
UPDATE: In case there is a need to use a JSON feed for the suites, see below.
Step 1: Create a JSON file with Key-Value pairs of suites
{
"smoke": "demo.js,demo2.js",
"sanity": "demo2.js,demo.js,demo3.js",
"demo": "demo.js"
}
Step 2: Import the JSON and edit the config file accordingly. In case you also want the suite names generated dynamically, create a custom function to iterate over the JSON and build the suites.
var suitesJson = require('./suites.json');
exports.config = {
suites: {
smoke: suitesJson.smoke.split(","),
sanity: suitesJson.sanity.split(","),
demo: suitesJson.demo.split(",")
  },
  // ...rest of the config
};
Or, in case you need to completely construct the suites object out of the JSON (when you don't even know the suite names):
Protractor Config File
var suitesJson = require('./suites.json');
var suitesAll = {}
for(var myKey in suitesJson) {
suitesAll[myKey] = suitesJson[myKey].split(",");
}
exports.config = {
  suites: suitesAll,
  // ...rest of the config
};
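Whichever variant you use, a single suite can then be selected on the command line, for example (the suite name here is just illustrative):
protractor protractor.conf.js --suite smoke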
While running grunt server for development, how can I separately use the grunt qunit task to run the tests?
I tried passing ["test/**/*.html"] to the all property, but that fails to run and returns (Warning: 0/0 assertions ran (0ms)).
It looks like it doesn't fire off a PhantomJS instance and doesn't find these pages.
So I tried the following
grunt.initConfig({
....
qunit: {
all: {
options: {
urls: ['http://localhost:<%= connect.options.port %>/test/tests/foo.html']
}
}
}
....
});
It only works when I manually include all the test HTML pages (as in the example), and that is the problem.
My question is: can grunt qunit work properly even while using grunt server?
And how can I make the ["test/**/*.html"] syntax work correctly? I am sure there must be a better way this should work!
Also, how can grunt.file.expand be utilized to programmatically add matching files to run in the grunt qunit task?
I've done something like this:
grunt.initConfig({
...
'qunit': {
'files': ['test/**/*.html']
}
...
});
...
// Wrap the qunit task
grunt.renameTask('qunit', 'contrib-qunit');
grunt.registerTask('qunit', function(host, protocol) {
host = host || 'localhost';
protocol = protocol || 'http';
// Turn qunit.files into urls for contrib-qunit
var urls = grunt.util._.map(grunt.file.expand(grunt.config.get('qunit.files')), function(file) {
return protocol + '://' + host + '/' + file;
});
var config = { 'options': { 'urls' : urls } };
grunt.config.set('contrib-qunit.all', config);
grunt.task.run('contrib-qunit');
});
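Since the wrapped task receives host and protocol as task arguments, it can then be run with grunt's colon syntax, for example (the values are just illustrative):
grunt qunit
grunt qunit:localhost:https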