Rails 3: how to use the testing database in RSpec/Selenium, etc.? - ruby-on-rails-3

I have many tests like creating a user, updating it, etc. Some of the controllers access a Mongo database, and the tests hit it as well, adding data to the database.
Is there a way to keep the test suite from writing to that database? It is annoying that every time I run the tests I end up with 100 or more extra rows.
Thanks

Are you defining access to the mongo database in your database.yml? If so, set up a connection for the test environment:
development: &default_settings
  database: APPNAME_development
  host: 127.0.0.1
  port: 27017

test:
  <<: *default_settings
  database: APPNAME_test
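With the test entry in place, the suite will write to APPNAME_test instead of your development data. If rows still pile up between runs because the app talks to Mongo through Mongoid (or another Mongo client) rather than ActiveRecord, you can also wipe the test database around each example. A minimal sketch, assuming the database_cleaner gem with its Mongoid adapter (adjust to whatever client the app actually uses):

# spec/spec_helper.rb
require 'database_cleaner'

RSpec.configure do |config|
  config.before(:suite) do
    # Mongo has no transactional rollback here, so truncation is the usual strategy.
    DatabaseCleaner[:mongoid].strategy = :truncation
  end

  config.before(:each) { DatabaseCleaner[:mongoid].start }

  config.after(:each) do
    # Empties the APPNAME_test collections so rows don't accumulate between runs.
    DatabaseCleaner[:mongoid].clean
  end
end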
If you are accessing the mongo database through some sort of web service API, then you can use a combination of FakeWeb and VCR to record the requests and responses. Subsequent requests from your tests to the service will be served the cached responses rather than hitting it directly.
https://github.com/myronmarston/vcr
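For reference, a minimal VCR setup along those lines might look like the sketch below; the file location, cassette directory and the :fakeweb hook are assumptions (VCR 2.x-era defaults), so adapt them to your project:

# spec/support/vcr.rb
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.hook_into :fakeweb            # or :webmock, depending on the stubbing gem in use
  c.configure_rspec_metadata!     # lets examples opt in with :vcr metadata
end

# Usage: the first run records the real HTTP exchange to a cassette;
# later runs replay the cassette instead of hitting the service.
describe 'external user service', :vcr do
  it 'creates a user without touching the live API' do
    # ... exercise the code that issues the HTTP request ...
  end
end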

Related

Testing site with IP addr whitelist using BrowserStack automate + cloud hosted CI

I have a test system (various web pages / web applications) that is hosted in an environment accessible only from machines with whitelisted IP addresses. I control the whitelist.
Our CI system is cloud-hosted (GitLab), so VMs are spun up dynamically as needed to run automated integration tests as part of the build pipeline.
The tests in question use BrowserStack Automate to run Selenium-based tests, which means the source IP addresses of the BrowserStack-driven requests that hit the test environment are dynamic, as BS is cloud-hosted. The IP addresses of our test runner machines that invoke the BrowserStack automation are dynamic as well.
The whole system worked fine before IP whitelisting was introduced on the test environment. Since whitelisting was enabled, the BrowserStack tests can no longer reach the environment URLs (the dynamic IPs cannot be whitelisted).
I have been trying to get the CI-driven tests working again using the BS "Local Testing" feature, outlined here: https://www.browserstack.com/local-testing.
I have set up a dedicated Linux VM with a static IP address (cloud-hosted). I have installed and am running the BrowserStackLocal.exe binary, using our BS key. It starts up fine and says it has connected to BrowserStack via a web socket. My understanding is that this should cause all http(s) requests coming from my CI / BrowserStack-driven tests to be routed through that stand-alone machine (via the BS cloud), so that its static IP address is the source of the requests seen at the test environment. This IP address is whitelisted.
This is the command that is running on the dedicated / static IP machine:
BrowserStackLocal.exe --{access key} --verbose 3
I have also tried the below, but it made no apparent difference:
BrowserStackLocal.exe --{access key} --force-local --verbose 3
However, this does not seem to work, either through "Live" testing when I try to access the test env directly through BrowserStack, or through BS Automate. In both cases the http(s) requests all time out and cannot reach our test environment URLs. Even with the --verbose 3 logging level enabled on the BrowserStackLocal.exe process, I never see any requests logged on the stand-alone / static IP machine when I run the tests in various ways.
So I am wondering if this is the correct way to solve this problem? Am I misunderstanding how to do this? Do I need to run the BrowserStackLocal.exe perhaps on the same CI runner machine that is invoking the BS automation? This would be problematic as these have dynamic IPs as well (currently).
Thanks in advance for any help!
EDIT/UPDATE: I managed to get this to work!! (Sort of) - it's just a bit slow. If I run the following command on my existing dedicated / static IP server:
BrowserStackLocal.exe --key {mykey} --force-local --verbose 3
Then on another machine (like my dev laptop), if I point at the BS web driver hub http://hub-cloud.browserstack.com/wd/hub and load http://www.whatsmyip.org/ to see what IP address comes back, it did (eventually) come back with my static IP machine's address! The problem is that it was quite slow - 20-30 secs for that one site hit - so I'm still looking at alternative solutions. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true' - e.g. for Node.js:
// Input capabilities
var capabilities = {
  'browserstack.local': 'true'
};
UPDATE 2: Turning down the --verbose logging level on the local binary (or leaving that flag off completely) seemed to improve things - I am getting 5-10 sec response times now for each request. That might have to do. But this does work as described.
SOLUTION: I managed to get this to work - it's just a bit slow. If I run the following command on my existing dedicated / static IP server (note: adding verbose logging seems to slow things down further, so no --verbose flag is used now):
BrowserStackLocal.exe --key {mykey} --force-local
Then on another machine (like my dev laptop), if I point at the BS web driver hub http://hub-cloud.browserstack.com/wd/hub and load http://www.whatsmyip.org/ to see what IP address comes back, it comes back with my static IP machine's address. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true' - e.g. for Node.js:
// Input capabilities
var capabilities = {
  'browserstack.local': 'true'
};
So while a little slow, that might have to do. But this does work as described.
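If your test code is Ruby rather than Node.js, the same capability can be set through selenium-webdriver; here is a rough sketch using the 3.x-style remote driver API (the BS_USER / BS_KEY environment variable names are placeholders, not anything BrowserStack mandates):

require 'selenium-webdriver'

# Route the session's traffic through the BrowserStackLocal tunnel machine.
caps = Selenium::WebDriver::Remote::Capabilities.chrome
caps['browserstack.local'] = 'true'

driver = Selenium::WebDriver.for(
  :remote,
  url: "https://#{ENV['BS_USER']}:#{ENV['BS_KEY']}@hub-cloud.browserstack.com/wd/hub",
  desired_capabilities: caps
)

driver.navigate.to 'http://www.whatsmyip.org/'
puts driver.title   # the page should report the tunnel machine's static IP
driver.quit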

What is the proper way to use fixtures with chimp and Meteor

I'm playing with the chimp testing tool. At the moment I can easily run cucumber and mocha tests. The problem is that I don't know how to add DB fixtures. I'd like to have initial data in place before running some tests (e.g. add a test user to the system).
BTW, that data can be added only by an authenticated user, and users can be created only by an admin or at the server level.
Can't find any docs about this for now. Any suggestions?
If you are using Meteor, you can pass the DDP parameter on the command line (--DDP=http://localhost:3000) and then use server.execute to run code on the server. This code can then set up the data.
If you are not using Meteor, you can make an HTTP call, e.g. request.get('http://localhost:8080/addUser').
Through HTTP / DDP you can reach the server and create a testing backdoor to set up the data you need.

How to restrict access to Selenium Standalone Server instance?

I have an instance of Selenium Standalone Server on a virtual Windows box to run my tests on. It's started in the following way:
java -jar selenium-server-standalone-2.46.0.jar -D"webdriver.chrome.driver"=chromedriver_2.13.exe -D"webdriver.ie.driver"=IEDriverServer_2.44.exe
Today I noticed some unexpected lines in the output. The tests run during the night, and afterwards I see the following:
08:46:20.197 INFO - Couldn't proxy to http://www.cv7.waw.pl/108258/Dachy/artykul.html because host not found
11:12:07.873 INFO - Couldn't proxy to http://g1nkaku.bieszczady.pl/damy-rade-zespol-na-wesele-bydgoszcz because host not found
11:49:49.204 INFO - Couldn't proxy to http://www.swiat.opt.waw.pl/Kryszyn/planeta-102-7/ because host not found
None of my tests access any such links, especially not at those timestamps, and I have no idea where they come from. My assumption is that someone discovered the address of this Selenium server instance and was sending requests through it as an open proxy.
What are my options for restricting access to the Selenium server? Is there any way to require a custom login/password from all clients of this server? It is used by several people on our team from multiple locations, so IP-based checks are not an option.

Rails - MongoDB replica set issue

I was doing failover testing of MongoDB in my local environment. I have two mongo servers (hostname1, hostname2) and an arbiter.
I have the following configuration in my mongoid.yml file
localhost:
  hosts:
    - - hostname1
      - 27017
    - - hostname2
      - 27017
  database: myApp_development
  read: :primary
  use_activesupport_time_zone: true
Now when I start my rails application, everything works fine and the data is read from the primary (hostname1). Then I kill the mongo process of the primary (hostname1), so the secondary (hostname2) becomes the primary and starts serving the data.
After some time I start the mongo process on hostname1 again, and it becomes the secondary in the replica set.
Now the primary (hostname2) and secondary (hostname1) are working all right.
The real problem starts here.
I kill the mongo process of my new primary (hostname2), but this time the secondary (hostname1) does not become the primary, and any further requests to the rails application raise the following error:
Cannot connect to a replica set using seeds hostname2
Please help. Thanks in advance.
**UPDATE:**
I added some logging to the mongo driver's repl_connection class and came across this:
When I boot the rails app, both hosts are in the seeds array that the mongo driver keeps track of, but during the second failover only the host that went down is present in that array.
Hence I would also like to know how and when one of the hosts gets removed from the seed list.

Two versions of the same ASP.NET app using the same server as state server - bad?

We have 2 production web servers for our web app, load balanced to handle lots of traffic.
We also have a similar setup for testing.
Test pool: [TEST 1]---[TEST 2]
Prod pool: [PROD 1]---[PROD 2]
When comparing the Web.config of the two app versions (test vs. live), I discovered something surprising: both pools have the same value for stateConnectionString. If I understand correctly, this means they are using the same state server:
<sessionState
    mode="StateServer"
    stateConnectionString="tcpip=123.123.123.123:42424"
    cookieless="false"
    timeout="30" />
Is this a problem? (How does the state server not confuse the two pools?)
I was having odd, intermittent slowdowns/errors on the test servers (that's why I was looking at this in the first place), but the prod pool runs fine...
What this really means is that server 123.123.123.123 is the single source of all shared state for all servers on the web farm.
It's conceptually no different from storing the state in a centralized database, except that in this case everything is held in memory on that one server rather than in a database.
I don't see anything wrong with it, per se.