TestCafe EC2 Network logs - testing

We are "successfully" running our gherkin-testcafe build on ec2 headless against chromium. The final issue we are dealing with is that at a certain point in the test a CTA button is showing ...loading instead of Add to Bag, presumably because a service call that gets the status of the product, out of stock, in stock, no longer carry, etc. is failing. The tests work locally of course and we have the luxury of debugging locally opening chrome's dev env and inspecting the network calls etc. But all we can do on the ec2 is take a video and see where it fails. Is there a way to view the logs of all the calls being made by testcafe's proxy browser so we can confirm which one is failing and why? We are using. const rlogger = RequestLogger(/.*/, {
logRequestHeaders: true,
logResponseHeaders: true
});
to log our headers but not getting very explicit reasons why calls are not working.

TestCafe uses the debug module for its internal logging. So, to view the TestCafe proxy (hammerhead) logs, you can set the DEBUG environment variable in the following manner:
export DEBUG='hammerhead:*'
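If the hammerhead output is too low-level, the RequestLogger itself can also capture status codes and response bodies, which usually makes the failing call obvious. A minimal sketch, assuming a placeholder fixture and page URL (logResponseBody and stringifyResponseBody are standard RequestLogger options):

import { RequestLogger } from 'testcafe';

const rlogger = RequestLogger(/.*/, {
    logRequestHeaders:     true,
    logResponseHeaders:    true,
    logResponseBody:       true,  // capture bodies so a failing call explains itself
    stringifyResponseBody: true
});

fixture('Product page')            // placeholder fixture name
    .page('https://example.com')   // placeholder URL
    .requestHooks(rlogger);

test('dump failing service calls', async t => {
    // ...drive the page to the point where the CTA hangs...
    const failed = rlogger.requests.filter(r => r.response && r.response.statusCode >= 400);
    console.log(JSON.stringify(failed, null, 2));
});

Run this once on EC2 and the console output should show exactly which service call returns an error status, and with what body.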


How to locally run my cloudflare worker serverless function, during development?

I managed to deploy my first Cloudflare Worker using the Serverless framework according to
https://serverless.com/framework/docs/providers/cloudflare/guide/
and it works when I hit the cloud.
During development, I would like to be able to test on http://localhost:8080/*
What is the simplest way to bring up a local HTTP server and handle my requests using the function specified in serverless.yml?
I looked into https://github.com/serverless/examples/tree/master/google-node-simple-http-endpoint
but there is no "start" script.
There seem to be no examples for Cloudflare on https://github.com/serverless/
At present, there is no way to run the real Cloudflare Workers runtime locally. The Workers team knows that developers need this, but it will take some work to separate the core Workers runtime from the rest of Cloudflare's software stack, which is otherwise too complex to run locally.
In the meantime, there are a couple options you can try instead:
Third-party emulator
Cloudworker is an emulator for Cloudflare Workers that runs locally on top of Node.js. It was built by engineers at Dollar Shave Club, a company that uses Workers, not by Cloudflare. Since it's an entirely independent implementation of the Workers environment, there are likely to be small differences between how it behaves and the "real thing". However, it's good enough to get some work done.
Preview Service API
The preview seen on cloudflareworkers.com can be accessed via API. With some curl commands, you can upload your code to cloudflareworkers.com and run tests on it. This isn't really "local", but if you're always connected to the internet anyway, it's almost the same. You don't need any special credentials to use this API, so you can write some scripts that use it to run unit tests, etc.
Upload a script called worker.js by POSTing it to https://cloudflareworkers.com/script:
SCRIPT_ID=$(curl -sX POST https://cloudflareworkers.com/script \
  -H "Content-Type: text/javascript" --data-binary @worker.js | \
  jq -r .id)
Now $SCRIPT_ID will be a 32-digit hex number identifying your script. Note that the ID is based on a hash, so if you upload the exact same script twice, you get the same ID.
Next, generate a random session ID (32 hex digits):
SESSION_ID=$(head -c 16 /dev/urandom | xxd -p)
It's important that this session ID be cryptographically random, because anyone with the ID will be able to connect devtools to your preview and debug it.
Let's also define two pieces of configuration:
PREVIEW_HOST=example.com
HTTPS=1
These specify that when your worker runs, the preview should act like it is running on https://example.com. The URL and Host header of incoming requests will be rewritten to this protocol and hostname. Set HTTPS=1 if the URLs should be HTTPS, or HTTPS=0 if not.
Now you can send a request to your worker like:
curl https://00000000000000000000000000000000.cloudflareworkers.com \
  -H "Cookie: __ew_fiddle_preview=$SCRIPT_ID$SESSION_ID$HTTPS$PREVIEW_HOST"
(The 32 zeros can be any hex digits. When using the preview in the browser, these are randomly-generated to prevent cookies and cached content from interfering across sessions. When using curl, though, this doesn't matter, so all-zero is fine.)
You can change this curl line to include a path in the URL, use a different method (like -X POST), add headers, etc. As long as the hostname and cookie are as shown, it will go to your preview worker.
Finally, you can connect the devtools console for debugging in Chrome (currently this only works in Chrome, unfortunately). Quote the URL, since it contains & and would otherwise be split by the shell:
google-chrome "https://cloudflareworkers.com/devtools/inspector.html?wss=cloudflareworkers.com/inspect/$SESSION_ID&v8only=true"
Note that the above API is not officially documented at present and could change in the future, but changes should be relatively easy to figure out by opening cloudflareworkers.com in a browser and looking at the requests it makes.
You may also be able to test locally by loading the Cloudflare worker as a service worker.
Note:
Use a local web server with https:. Workers won't load using file: or http: protocols.
Your browser will need to support workers, so you can't use IE.
Mock any Cloudflare-specific features, such as KV.
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
</head>
<body>
  <!-- Service worker registration -->
  <script>
    if ('serviceWorker' in navigator) {
      // Register the ServiceWorker
      navigator.serviceWorker.register('/service-worker.js')
        .then(function (reg) {
          // Registration succeeded; reload once so the worker controls the page.
          // Guarding on controller avoids an infinite reload loop.
          console.log('[registerServiceWorker] Registration succeeded. Scope is ' + reg.scope)
          if (!navigator.serviceWorker.controller) {
            window.location.reload()
          }
        })
        .catch(function (error) {
          // Registration failed
          console.log('[registerServiceWorker] Registration failed with ' + error)
        })
    } else {
      console.log('[registerServiceWorker] Service workers aren\'t supported')
    }
  </script>
</body>
</html>
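The registration above expects a /service-worker.js next to the page. As a minimal sketch of what that file might contain, here is a Worker-style fetch handler with a mocked KV binding (MY_KV and the response text are hypothetical placeholders, not real Cloudflare bindings):

// service-worker.js: a Worker-style fetch handler for local testing.
// Cloudflare-specific features such as KV don't exist in the browser,
// so mock them; MY_KV here is a hypothetical stand-in.
const MY_KV = {
    get: async function (key) { return null; },
    put: async function (key, value) {}
};

addEventListener('fetch', function (event) {
    event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
    const cached = await MY_KV.get(request.url);
    return new Response(cached || 'Hello from the locally emulated worker!', {
        headers: { 'content-type': 'text/plain' }
    });
}

Serve both files from the same origin over https and, after the first reload, requests from the page will be handled by this worker.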
Dollar Shave Club created Cloudworker. It is not actively maintained, but it is a way to run Cloudflare Workers locally.
You can read about it on the Cloudflare blog in a guest post by the original maintainer of Cloudworker.

Testing site with IP addr whitelist using BrowserStack automate + cloud hosted CI

I have a test system (various web pages / web applications) that is hosted in an environment accessible only via machines with IP addresses that are whitelisted. I control the whitelist.
Our CI system is cloud hosted (GitLab), so VMs are spun up dynamically as needed to run automated integration tests as part of the build pipeline.
The tests in question use BrowserStack automation to run Selenium based tests, which means the source IP addresses of the BrowserStack-driven requests that hit the test environment are dynamic, as BS is cloud hosted. The IP addresses of our test runner machines that invoke the BrowserStack automation are dynamic as well.
The whole system worked fine before the introduction of IP whitelisting on the test environment. Since whitelisting was enabled, the BrowserStack tests can no longer access the environment URLs (because the dynamic IPs cannot be whitelisted).
I have been trying to get the CI driven tests working again using the BS "Local Testing" feature, outlined here: https://www.browserstack.com/local-testing.
I have set up a dedicated Linux VM with a static IP address (cloud hosted), and have installed and am running the BrowserStackLocal.exe binary using our BS key. It starts up fine and says it has connected to BrowserStack via a web socket. My understanding is this should cause all http(s) requests from my CI / BrowserStack automation driven tests to be routed through that stand-alone machine (via the BS cloud), so that its static IP address is the source of the requests seen at the test environment. This IP address is whitelisted.
This is the command that is running on the dedicated / static IP machine:
BrowserStackLocal.exe --{access key} --verbose 3
I have also tried the below, but it made no apparent difference:
BrowserStackLocal.exe --{access key} --force-local --verbose 3
However, this does not seem to work. Whether through "live" testing (accessing the test env directly through BrowserStack) or through BS Automate, the http(s) requests all time out and cannot reach our test environment URLs. Even with the --verbose 3 logging level enabled on the BrowserStackLocal.exe process, I never see any request logged on the stand-alone / static-IP machine when I run the tests in various ways.
So I am wondering if this is the correct way to solve this problem. Am I misunderstanding how to do this? Do I need to run BrowserStackLocal.exe on the same CI runner machine that invokes the BS automation? That would be problematic, as those have dynamic IPs as well (currently).
Thanks in advance for any help!
EDIT/UPDATE: I managed to get this to work!! (Sort of.) It's just a bit slow. If I run the following command on my existing dedicated / static-IP server:
BrowserStackLocal.exe --key {mykey} --force-local --verbose 3
then on another machine (like my dev laptop), if I hit the BS WebDriver hub at http://hub-cloud.browserstack.com/wd/hub and access http://www.whatsmyip.org/ to see what IP address comes back, it did (eventually) come back with my static-IP machine's address! The problem is that it was quite slow: 20-30 seconds for that one site hit, so I'm still looking at alternative solutions. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true', e.g. for Node.js:
// Input capabilities
var capabilities = {
    'browserstack.local': 'true'
};
UPDATE 2: Turning down the --verbose logging level on the local binary (or leaving that flag off completely) seemed to improve things: I am getting 5-10 second response times now for each request. That might have to do. But this does work as described.
SOLUTION: I managed to get this to work; it's just a bit slow. If I run the following command on my existing dedicated / static-IP server (note that verbose logging seems to slow things down further, so no --verbose flag is used now):
BrowserStackLocal.exe --key {mykey} --force-local
then on another machine (like my dev laptop), if I hit the BS WebDriver hub at http://hub-cloud.browserstack.com/wd/hub and access http://www.whatsmyip.org/ to see what IP address comes back, it comes back with my static-IP machine's address. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true', e.g. for Node.js:
// Input capabilities
var capabilities = {
    'browserstack.local': 'true'
};
So while a little slow, that might have to do. But this does work as described.
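For completeness, here is a hedged sketch of a Node.js test that sets that capability and talks to the BrowserStack hub, using the selenium-webdriver package (the environment variable names are assumptions):

// Connect to the BrowserStack hub with the Local tunnel enabled.
// BROWSERSTACK_USER / BROWSERSTACK_KEY are assumed env var names.
const { Builder } = require('selenium-webdriver');

const capabilities = {
    'browserstack.user': process.env.BROWSERSTACK_USER,
    'browserstack.key': process.env.BROWSERSTACK_KEY,
    'browserstack.local': 'true', // route traffic through the BrowserStackLocal tunnel
    browserName: 'Chrome'
};

(async function () {
    const driver = await new Builder()
        .usingServer('http://hub-cloud.browserstack.com/wd/hub')
        .withCapabilities(capabilities)
        .build();
    await driver.get('http://www.whatsmyip.org/'); // should report the static IP
    await driver.quit();
})();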

How can I replace the server in Web Component Tester

I have a project set up based around the Polymer Starter Kit, which includes Web Component Tester.
This project includes PHP server code which I would also like to test, by writing tests that run in the browser and exercise the PHP code through Ajax calls.
This implies replacing the server that Web Component Tester uses ONLY when testing the server-side code. I hope to make a separate gulp task for this.
Unfortunately, I don't understand the relationship between WCT, Selenium and whatever server is currently run. I can see that the WCT command starts Selenium, but I can't find out what the web server is and how it is started. I suspect it is WCT itself, because there is configuration for mapping directories to URLs, but beyond that I haven't a clue, despite trying to read the code.
Can someone explain how I can make WCT run its own server when testing the client, but rely on an already-configured web server (nginx) when testing the server code? I can set nginx to run on localhost, or another domain, if that is a way to choose a different configuration.
EDIT: I have now found that runner/webserver.js starts an express server, and that URLs get mapped so that the base directory for the test runner and the bower_components directory both map to the /components URL.
What currently confuses me is under what circumstances this gets run. It appears that loading plugins somehow does it, but my understanding from reading that code is tenuous.
The answer is that Web Component Tester itself has a comment in the runner/config.js file.
In wct.conf.js, you can add a registerHooks key to the object that gets returned, with a function such as:
module.exports = {
    registerHooks: function (wct) {
        wct.hook('prepare:webserver', function (app, done) {
            var proxy = require('express-http-proxy');
            // Proxy /api requests to the PHP server so Ajax calls work under test
            app.use('/api',
                proxy('pas.dev', {
                    forwardPath: function (req, res) {
                        return require('url').parse(req.url).path;
                    }
                })
            );
            done();
        });
    }
};
This registerHooks function lets you provide a route (/api in my case) that is proxied to a server that can run the PHP scripts.

WebDriver (Selenium 2) - How to make Selenium operate on elements without waiting for connections to external ad links?

Environment:
- Selenium 2.39 Standalone Server
- PHP 5.4.11
- PHPUnit 3.7.28
- Chrome V31 & ChromeDriver v2.7
I'm testing a website which loads a lot of advertising systems, such as Google Ads.
The browser takes a long time connecting to the external ad links, even after all the elements of the page have already loaded.
If my network is slow when I run my tests on a page, Selenium waits a very long time, since the ad links respond slowly.
Under this condition, Selenium usually waits for over 60 seconds and then throws a timeout exception.
I'm not sure how Selenium works, but it seems that Selenium has to wait for a sign of the webpage's full loading, then pulls the DOM to find elements.
I want to make Selenium operate on elements without waiting for connections to the external ad links.
Is there a way to do that? Thank you very much.
I would suggest that you make use of a proxy. BrowserMob Proxy integrates well with Selenium and is very easy to use:
// Selenium imports
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;
// BrowserMob import: net.lightbody.bmp.proxy.ProxyServer on newer releases
// (older builds used org.browsermob.proxy.ProxyServer)

// start the proxy
ProxyServer server = new ProxyServer(4444);
server.start();

// get the Selenium proxy object
Proxy proxy = server.seleniumProxy();

// this line will automatically return HTTP 200 for any request going to Google Analytics
server.blacklistRequests("https?://.*\\.google-analytics\\.com/.*", 200);

// configure it as a desired capability
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setCapability(CapabilityType.PROXY, proxy);

// start the browser up
WebDriver driver = new FirefoxDriver(capabilities);
I'm not sure how Selenium works, but it seems that Selenium has to wait for a sign of the webpage's full loading, then pulls the DOM to find elements.
It is pretty much like this. The default loading strategy is "NORMAL", which means:
NORMAL of type DOMString
The remote end MUST wait until the "document.readyState" of the frame currently handling commands equals "complete", or there are no more outstanding network requests other than XMLHttpRequests.
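If waiting for the full load is the problem, the page load strategy can also be relaxed. A sketch with the Node selenium-webdriver bindings (the question uses PHP and Selenium 2, so treat this as illustrative only): "eager" returns once DOMContentLoaded fires, without waiting for slow ad requests.

// Use the "eager" page load strategy so get() returns at DOMContentLoaded.
const { Builder, Capabilities } = require('selenium-webdriver');

(async function () {
    const caps = Capabilities.chrome();
    caps.setPageLoadStrategy('eager'); // don't wait for subresources like ad scripts
    const driver = await new Builder().withCapabilities(caps).build();
    await driver.get('https://example.com'); // placeholder URL
    await driver.quit();
})();

Note that with a non-normal strategy you must add your own explicit waits for the elements you interact with.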
I finally found a simple solution for my situation.
I decided to block these ad requests, and tried some firewall and proxy programs, for example Comodo and Privatefirewall.
Comodo is too heavy and complex, Privatefirewall doesn't support wildcards, and a firewall would interrupt the tests. In the end I chose a proxy program, CCProxy; the trial version is enough.
I created a rule for localhost so that it can request my test website's domain only, and all other requests are rejected.
Running a test took about 1-2 minutes before and takes only 30 seconds now; it's noticeably more stable and faster without connecting to the useless ad links.
Here are the configuration steps:
1. Launch CCProxy with Administrator privileges (you should set it to run as Administrator in the file's properties).
2. Click Options, select Auto Startup, select Auto Detected for Local IP Address, and click OK.
3. Create a txt file and input your domains, like " *.rong360.com*;*.rong360.*; ".
4. Click Account and select PermitOnly for Permit Category;
click New and input 127.0.0.1 for IP Address/Range;
select WebFilter and click the E button at the right side to create a filter;
click the ... button and select the text file you created at step 3;
select PermittedSites and click OK;
click OK again.
5. Click OK to return to the main UI of CCProxy.
6. Launch IE and configure the local proxy as 127.0.0.1:808;
other browsers will use this configuration automatically too.
Now you can run the tests again; you'll feel better if you have the same conditions :)

how to handle cross-domain testing in selenium

How do I handle cross-domain functionality in Selenium? Can anyone explain?
For example: I need to open google.com and Gmail using the same Selenium session object. I was seeing a "permission denied" error; I tried *iehta and proxy injection mode as well, and it didn't work. Can you help me out?
I found this answer on stackexchange.com:
You should be able to do so while using browsers with elevated security privileges, like *chrome for Firefox. So you could just do
selenium.open("newURL");
in your test. The problem with changing the URL is that the domain changes, and normal Selenium browser mode is restricted by JavaScript's Same Origin Policy; as mentioned above, browsers with elevated security privileges should get you going.
I suppose this is the point where you are trying to load another URL in the same Selenium session:
sel.open("www.google.com");
sel.waitForPageToLoad(stimeout);
First, don't use waitForPageToLoad; the open API takes care of it. Now if sel.open does not work, then you should definitely encounter an error. Don't keep your method in a try/catch block, and look at the error you encounter.
Source: https://sqa.stackexchange.com/questions/761/can-the-base-url-be-changed-in-the-same-browser-session-using-selenium-rc
If you can't open two different domains with one Selenium object, try using a different object for each domain (e.g. an object called seleniumGoogle and an object called seleniumGmail).
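A minimal sketch of that two-object approach, using the modern selenium-webdriver bindings rather than Selenium RC (the object names are illustrative):

// Two independent sessions, one per domain, so the Same Origin Policy
// restriction of a single session never comes into play.
const { Builder } = require('selenium-webdriver');

(async function () {
    const seleniumGoogle = await new Builder().forBrowser('chrome').build();
    const seleniumGmail = await new Builder().forBrowser('chrome').build();

    await seleniumGoogle.get('https://www.google.com');
    await seleniumGmail.get('https://mail.google.com');

    // ...drive each session independently...

    await seleniumGoogle.quit();
    await seleniumGmail.quit();
})();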