How to use StaticLiveServerTestCase with different domains? - django-testing

I am using Selenium for functional tests with geckodriver and Firefox.
I see the host is http://localhost:62305, and this is generated in that class with:
@classproperty
def live_server_url(cls):
    return 'http://%s:%s' % (cls.host, cls.server_thread.port)
In the functionality I am creating, I give the user a way to create a tenant with its own subdomain, but for the purpose of the test it can be a full domain.
So, for example, if I create a tenant with the domain example.com, how can I get
StaticLiveServerTestCase to point that domain name at the currently running app on that same port, within the same functional test method?
I looked at this post suggesting editing /etc/hosts. The problem with that is that it won't work out of the box on CI or on other people's machines.

You can do this:

class SeleniumTests(StaticLiveServerTestCase):
    @classmethod
    def setUpClass(cls):
        cls.host = 'host.com'
        cls.port = 3000
        super(SeleniumTests, cls).setUpClass()
        ...
in your test class. Once you have done this, a call such as
self.selenium.get('%s%s' % (self.live_server_url, '/some_url/'))
will launch your browser at the right URL, in this case
http://host.com:3000/some_url/
Notice that the super() call is made after setting the host and port.
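For a fuller picture, here is a minimal sketch of how this could look end to end. It is not part of the answer above; the test class name is hypothetical, and it assumes example.com resolves to 127.0.0.1 on the machine running the tests (hosts file or local DNS), so the live server can bind to it:

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium import webdriver


class TenantSeleniumTests(StaticLiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        cls.host = 'example.com'  # must resolve to 127.0.0.1 locally
        cls.port = 3000
        super(TenantSeleniumTests, cls).setUpClass()  # called after host/port are set
        cls.selenium = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.selenium.quit()
        super(TenantSeleniumTests, cls).tearDownClass()

    def test_tenant_homepage(self):
        # live_server_url is now http://example.com:3000
        self.selenium.get('%s%s' % (self.live_server_url, '/'))
        self.assertIn('example.com', self.selenium.current_url)

The hosts-file caveat from the question still applies: the server can only bind to example.com if that name resolves locally, so on CI that mapping would have to be added as part of the job setup.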

Related

Testing site with IP addr whitelist using BrowserStack automate + cloud hosted CI

I have a test system (various web pages / web applications) that is hosted in an environment accessible only from machines with IP addresses that are whitelisted. I control the whitelist.
Our CI system is cloud hosted (Gitlab), so VMs are spun up dynamically as needed to run automated integration tests as a part of the build pipeline.
The tests in question use BrowserStack automation to run Selenium based tests, which means the source IP addresses of the BrowserStack automation driven requests that hit the test environment are dynamic, as BS is cloud hosted. Also the IP addresses of our test runner machines that call / invoke the BrowserStack automation are dynamic as well.
The whole system worked fine before the introduction of IP whitelisting on the test environment. Since whitelisting was enabled, the BrowserStack tests can no longer access the environment URLs (due to not being able to whitelist the dynamic IPs).
I have been trying to get the CI driven tests working again using BS "Local Testing" feature, outlined here https://www.browserstack.com/local-testing.
I have set up a dedicated Linux VM with a static IP address (cloud hosted). I have installed and am running the BrowserStackLocal.exe binary, using our BS key. It starts up fine and says it has connected to BrowserStack via a web socket. My understanding is that this should cause all http(s) requests that come from my CI / BrowserStack automation driven tests to be routed through that stand-alone machine (via the BS cloud), resulting in its static IP address being the source of the requests seen at the test environment. That IP address is whitelisted.
This is the command that is running on the dedicated / static IP machine:
BrowserStackLocal.exe --{access key} --verbose 3
I have also tried the below, but it made no apparent difference:
BrowserStackLocal.exe --{access key} --force-local --verbose 3
However, this does not seem to work, either through "Live" testing (if I try to access the test env directly through BrowserStack) or through BS Automate. In both cases the http(s) requests all time out and cannot access our test environment URLs. Also, even with the --verbose 3 logging level enabled on the BrowserStackLocal.exe process, I never see any requests being logged on the stand-alone / static IP machine when I try to run the tests in various ways.
So I am wondering: is this the correct way to solve this problem? Am I misunderstanding how to do this? Do I need to run BrowserStackLocal.exe on the same CI runner machine that is invoking the BS automation? That would be problematic, as these have dynamic IPs as well (currently).
Thanks in advance for any help!
EDIT/UPDATE: I managed to get this to work!! (Sort of) - it's just a bit slow. If I run the following command on my existing dedicated / static IP server:
BrowserStackLocal.exe --key {mykey} --force-local --verbose 3
Then, on another machine (like my dev laptop), I hit the BS web driver server http://hub-cloud.browserstack.com/wd/hub and accessed the site http://www.whatsmyip.org/ to see what IP address came back, and it did (eventually) come back with my static IP machine's address! The problem, though, is that it was quite slow - 20-30 secs for that one site hit - so I am still looking at alternative solutions. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true' - e.g. for Node.js:
// Input capabilities
var capabilities = {
    'browserstack.local': 'true'
}
UPDATE 2: Turning down the --verbose logging level on the local binary (or leaving that flag off completely) seemed to improve things - I am getting 5-10 sec response times now for each request. That might have to do. But this does work as described.
SOLUTION: I managed to get this to work - it's just a bit slow. If I run the following command on my existing dedicated / static IP server (note that adding verbose logging seems to slow things down further, so no --verbose flag is used now):
BrowserStackLocal.exe --key {mykey} --force-local
Then, on another machine (like my dev laptop), I hit the BS web driver server http://hub-cloud.browserstack.com/wd/hub and accessed the site http://www.whatsmyip.org/ to see what IP address came back, and it did come back with my static IP machine's address. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true' - e.g. for Node.js:
// Input capabilities
var capabilities = {
    'browserstack.local': 'true'
}
So while a little slow, that might have to do. But this does work as described.
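For anyone driving this from Python rather than Node.js, a rough equivalent (a minimal sketch with placeholder credentials, using the Selenium 3-style desired_capabilities argument and the same classic 'browserstack.local' capability shown above):

from selenium import webdriver

# Placeholder credentials - substitute your own BrowserStack user and access key.
HUB_URL = 'http://YOUR_USER:YOUR_ACCESS_KEY@hub-cloud.browserstack.com/wd/hub'

capabilities = {
    'browserName': 'Chrome',
    # Route this session's traffic through the BrowserStackLocal tunnel
    # running on the static-IP machine.
    'browserstack.local': 'true',
}

driver = webdriver.Remote(command_executor=HUB_URL,
                          desired_capabilities=capabilities)
driver.get('http://www.whatsmyip.org/')
print(driver.title)
driver.quit()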

Spider returns different results from local machine and Scrapy Cloud(phantomjs+selenium+crawlera)

Hello!
This is a question for anyone who uses Scrapinghub, shub-image, Selenium+PhantomJS and Crawlera.
My English is not good, sorry.
I needed to scrape a site which has a lot of JS code, so I use scrapy+selenium. It also has to run on Scrapy Cloud.
I've written a spider which uses scrapy+selenium+phantomjs and ran it on my local machine. All is OK.
Then I deployed the project to Scrapy Cloud using shub-image. Deployment is OK, but the result of
webdriver.page_source is different: it's OK locally, but not OK in the cloud (HTML with a 403 message, although the HTTP request returns 200).
Then I decided to use a Crawlera account. I've added it with:
service_args = [
    '--proxy="proxy.crawlera.com:8010"',
    '--proxy-type=https',
    '--proxy-auth="apikey"',
]
For Windows (local):
self.driver = webdriver.PhantomJS(executable_path=r'D:\programms\phantomjs-2.1.1-windows\bin\phantomjs.exe', service_args=service_args)
For the Docker instance:
self.driver = webdriver.PhantomJS(executable_path=r'/usr/bin/phantomjs', service_args=service_args, desired_capabilities=dcap)
Again, locally all is OK; in the cloud it is not.
I've checked the Crawlera info; it's OK, and requests are sent from both (local and cloud).
Note again:
Same proxies (Crawlera).
Response on Windows:
200 HTTP, HTML with the right content.
Response on Scrapy Cloud (Docker instance):
200 HTTP, HTML with a 403 (Forbidden) message.
I don't get what's wrong.
I think it might be due to differences between the PhantomJS versions (Windows vs Linux).
Any ideas?
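For context, the full PhantomJS setup described above would look roughly like this (a sketch only; the dcap dictionary is not shown in the question, so it is assumed here to be an ordinary PhantomJS desired-capabilities dict with a user-agent override):

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Route PhantomJS traffic through the Crawlera proxy (same args as above).
service_args = [
    '--proxy="proxy.crawlera.com:8010"',
    '--proxy-type=https',
    '--proxy-auth="apikey"',
]

# Assumed capabilities: start from the PhantomJS defaults and override
# the user agent, which often differs between environments.
dcap = dict(DesiredCapabilities.PHANTOMJS)
dcap['phantomjs.page.settings.userAgent'] = (
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/60.0.3112.78 Safari/537.36'
)

driver = webdriver.PhantomJS(
    executable_path='/usr/bin/phantomjs',
    service_args=service_args,
    desired_capabilities=dcap,
)

Logging the effective user agent and PhantomJS version in both environments would make the Windows-vs-Linux comparison suspected above concrete.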

Browserstack&Selenium - proxy configuration

I am trying to set up BrowserStack and local testing.
I opened a tunnel using the BrowserStack Local client like below:
./BrowserStackLocal.exe myCodeToken -proxyHost MY_PROXY_IP -proxyPort MY_PROXY_PORT -v -force -forcelocal
So it is forwarding all the traffic through my local network.
In my local /etc/hosts file I have entry like below:
127.0.0.1 dev.mysite.com
Then, when I execute my simple Selenium test, the BrowserStack virtual machines are able to access my domain name and everything works fine. But in some sections where I have external scripts/CSS/images etc., for example Google Analytics, BrowserStack doesn't have access to them (or to the internet in general).
The thing is that my local machine uses MY_PROXY_IP:MY_PROXY_PORT to access the internet, and it looks like BrowserStack tries to forward all the traffic through my machine, which doesn't work.
Do you have any ideas on how to resolve this?
You can try removing the -forcelocal parameter, as that is responsible for routing all traffic via your machine. On doing this, any public (external) css/images would be resolved directly and not via your machine.
Alternatively, if the -forcelocal parameter is necessary, you can try using it with -only parameter. In this case, the requests for domain(s) mentioned under -only, would be routed via your machine whereas rest of the requests would be resolved publicly.
You can execute the Local Testing binaries as follows:
BrowserStackLocal.exe <automate-key> <the-proxy-parameters> -forcelocal -only host_name,port_no,ssl_flag
OR
BrowserStackLocal.exe <automate-key> <the-proxy-parameters> -forcelocal -only host1,port1,ssl_flag,host2,port2,ssl_flag (For multiple hosts)
More details on different parameters that can be used while setting up the Local Testing connection available here - https://www.browserstack.com/local-testing#modifiers
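For example, for the dev.mysite.com setup from the question (assuming it is served locally over plain HTTP on port 80 - adjust the port and ssl_flag to whatever your local site actually uses), the invocation might look like:
./BrowserStackLocal.exe myCodeToken -proxyHost MY_PROXY_IP -proxyPort MY_PROXY_PORT -forcelocal -only dev.mysite.com,80,0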

How to restrict access to Selenium Standalone Server instance?

I have an instance of Selenium Standalone Server on a virtual Windows box to run my tests on. It's being started in the following way:
java -jar selenium-server-standalone-2.46.0.jar -D"webdriver.chrome.driver"=chromedriver_2.13.exe -D"webdriver.ie.driver"=IEDriverServer_2.44.exe
Today I noticed some unexpected lines in the output. The tests run during the night, and afterwards I see the following:
08:46:20.197 INFO - Couldn't proxy to http://www.cv7.waw.pl/108258/Dachy/artykul.html because host not found
11:12:07.873 INFO - Couldn't proxy to http://g1nkaku.bieszczady.pl/damy-rade-zespol-na-wesele-bydgoszcz because host not found
11:49:49.204 INFO - Couldn't proxy to http://www.swiat.opt.waw.pl/Kryszyn/planeta-102-7/ because host not found
None of my tests were accessing any such links, especially not at those timestamps, and I have no idea where they could come from. My assumption is that someone found the address of this Selenium server instance and was sending requests through it.
What are my options to restrict access to the Selenium server? Is there any option to require a custom login/password for all clients of this server? It's used by several people on our team from multiple locations, so IP-based checks are not an option.

Grails generating proper links when deployed behind proxy

Consider the following setup for a deployed Grails application.
- the Grails application is deployed on a Tomcat server (tomcat7)
- in front of Tomcat an Apache webserver is deployed
- Apache does SSL offloading, and acts as a proxy for Tomcat
So far a quite standard setup, which I have used successfully many times. My issue is now with the links generated by the Grails application, especially those for redirects (the standard controller redirects, which occur all the time, e.g. after successfully posting a form).
One configuration is different from all the other applications so far: in this Grails application no serverURL is configured. The application is a multi-tenant application, where each tenant is given its own subdomain. (So if the application in general is running under https://www.example.com, a tenant can use https://tenant.example.com.) Subdomains are set up automagically, that is, without any configuration at the DNS or Apache level. Grails can do so perfectly, by leaving out the serverURL property in Config.groovy: it then resolves all URLs by inspecting the client host.
However: when generating redirect URLs, Grails is not aware that the website is running under https. All redirect URLs start with http... I guess this is no surprise, because nowhere in the application is it configured that we are using https: there is no serverURL config, and technically the application is running on the standard HTTP port of Tomcat, because of the SSL offloading and proxying by Apache.
So, bottom line: what can I do to make Grails generate proper redirects?
I have tried to extend the DefaultLinkGenerator and override the makeServerURL() method. Like this:
class MyLinkGenerator extends DefaultLinkGenerator {

    MyLinkGenerator(String serverBaseURL, String contextPath) {
        super(serverBaseURL, contextPath)
    }

    MyLinkGenerator(String serverBaseURL) {
        super(serverBaseURL)
    }

    def grailsApplication

    /**
     * @return serverURL adapted to the deployed scheme
     */
    String makeServerURL() {
        // get configured protocol
        def scheme = grailsApplication.config.grails.twt.baseProtocol ?: 'https://'
        println "Application running under protocol $scheme"
        // get url from super
        String surl = super.makeServerURL()
        println "> super.makeServerURL(): $surl"
        if (surl) {
            // if the super url does not match the scheme, change the scheme
            if (scheme == 'https://' && surl.startsWith('http://')) {
                surl = scheme + surl.substring(7)
                println "> re-written: $surl"
            }
        }
        return surl
    }
}
(Maybe not the most beautiful code, but I hope it explains what I'd like to do. And I left out the bit about configuring this class in resources.groovy.)
When running this code, strange things happen:
In the log you see the code being executed and a changed URL (http > https) being produced, but...
The redirect sent to the browser is the unchanged URL (http).
And even worse: all the resources in the generated views are crippled; they now all start with // (so what should be a relative "/css/myapp.css" is now "//css/myapp.css").
Any help or insight would be appreciated!
Grails version is 2.1.1 by the way (running a bit behind on upgrades...).
It seems you're always talking https to the outside world, so your cleanest option is to solve the problem where it originates, at your Apache webserver. Add the following to httpd.conf, and you're done:
Header edit Location ^http://(.*)$ https://$1
If you have limitations that force you to solve this in your application, you could rewrite the Location header field in a Grails after interceptor. Details of that solution are in this post.
Some years have passed since this question was written, but the problems remain the same, or at least similar ;-)
Just in case anyone hits the same/similar issue (Grails redirect URLs being http instead of https)... We had this issue with a Grails 3.3.9 application running on OpenShift. The application was running in HTTP mode (on port 8080) and the OpenShift load balancer was doing the SSL termination.
We solved it by putting
server:
  use-forward-headers: true
into our application.yml. After that, Grails works perfectly and all the redirects created were correct with https://.
Hint: we do not use grails.serverURL in our configuration.