How to run RSpec (or any other) tests in a Kubernetes cluster - testing

I changed my dev workflow to mainly develop in Kubernetes (with Tilt.dev). All dependencies are running, file changes are syncing, etc.
Now I'm stuck on the RSpec test process:
How do you run your tests in a Kubernetes cluster?
How do you deal with the dev dependencies? The production image does not contain the required dependencies.
How do you kick off the tests (rspec spec)?
Are you installing gems afterwards?
Are you creating an extra "test" image for that case?
I haven't found any bootstrap code. I am totally lost.
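No answer was posted in this thread, but one common pattern is worth sketching (a hedged sketch, not the thread's answer): build the dev image with the test gem groups included, then exec into the synced pod that Tilt manages. The namespace, label selector, and app name below are assumptions.

```shell
# Assumptions: namespace "dev", label app=myapp, and an image built with
# `bundle install` that includes the development/test gem groups.
POD=$(kubectl -n dev get pods -l app=myapp \
      -o jsonpath='{.items[0].metadata.name}')

# Run the suite inside the running pod that Tilt keeps in sync:
kubectl -n dev exec -it "$POD" -- bundle exec rspec spec
```

This avoids a separate "test" image entirely; the trade-off is a fatter dev image than the one you ship to production.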

Related

Should the Cypress testing framework be installed separately from the testee project?

I have a big web project with a separate backend and a front-end (webpack). I'm going to use Cypress to create end-to-end tests.
What is not clear is where I should add the Cypress tests and Cypress itself. The documentation says to add it right to the testee project, and it shows how to run the tests against the production website (whose URL is different from the local dev project's). This means that I'm not able to run the tests against the development project, because the Cypress testing IDE and the testee project can't run simultaneously - they share the same terminal.
If so, is the best solution to organize one more project, only for testing purposes, with just Cypress and the tests themselves installed? Is that good practice, and if so, what kind of project should it be?
We have the same setup at work. We include the Cypress folder in the front-end repo. I'd agree with keeping it right next to the project because you have easy access to that code, e.g. utility functions, selectors, etc. As for the terminal issue, you should be able to run your project locally in one terminal tab and the Cypress test runner in another.
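Concretely, the two-terminal setup is just this (a sketch; the npm script name assumes a typical webpack dev setup):

```shell
# Terminal tab 1: start the front-end dev server
# ("start" script name is an assumption about your package.json)
npm start

# Terminal tab 2: open the Cypress runner alongside it
npx cypress open
```

Pointing Cypress's baseUrl config at the local dev server's URL keeps the same tests runnable against both dev and production.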

Codeship with Testim.io - Am I testing my latest version?

Sorry for the dumb question, but I could not find an answer to it.
Codeship + Testim.io + Heroku.
In my staging env I use Testim.io to test the app once it's deployed.
The following tutorial is guiding me through invoking my tests - but I see the tests being invoked BEFORE the new app has been deployed - so isn't it testing one version behind my latest?
I expected the tests to run after the deploy.
Probably I am missing here something.
In that tutorial, the tests aren't supposed to run against your deployed version; they are supposed to run against the version being tested.
The flow is:
You set up a local environment - for example by checking out your code and running npm start. If it's containerized, then do that.
You run the Testim CLI and point its base URL at the local instance, like testim --token ... --project ... --suite ... --base-url=localhost:PORT.
After the tests pass, you deploy.
If you test the version after you deploy you can't be sure the version deployed actually passes your tests.
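The flow above could be sketched as a CI script like this (a sketch; the npm start command, the environment variable names, and the git-based Heroku deploy are assumptions, and the token/project/suite values stay elided as in the answer):

```shell
set -e   # any failing step aborts the script, so a failed test blocks the deploy

# 1. Boot the app locally (or `docker run ...` if containerized)
npm start &
APP_PID=$!

# 2. Run the Testim suite against the local instance
testim --token "$TESTIM_TOKEN" --project "$TESTIM_PROJECT" \
       --suite "$TESTIM_SUITE" --base-url "http://localhost:$PORT"

# 3. Only reached if the tests passed
kill "$APP_PID"
git push heroku main   # assumption: git-based Heroku deploy
```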
An alternative flow would be to lean into Heroku's deployment model. Note this isn't actually specific to them; there are similar alternatives in AWS/Azure/GCP/whatever:
In your CI, you set up a staging environment in Heroku: heroku create --remote staging-BRANCH-NAME-<COMMIT-NAME>
You deploy there.
You run the tests against that environment (by passing --base-url to the Testim CLI, navigating there in your test, or using a config file).
When the tests pass, you deploy to production.
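That alternative flow, as a sketch (the app-name pattern follows the answer, but the herokuapp.com URL and the BRANCH/COMMIT variables are assumptions):

```shell
set -e
APP="staging-$BRANCH-$COMMIT"            # naming pattern from the answer
heroku create "$APP" --remote staging    # 1. create a per-branch staging app
git push staging HEAD:main               # 2. deploy there
testim --token "$TESTIM_TOKEN" --project "$TESTIM_PROJECT" \
       --suite "$TESTIM_SUITE" \
       --base-url "https://$APP.herokuapp.com"   # 3. test that environment
```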

Build won't fail in Travis-CI, even though Selenium test fails

I'm building a project where we have to run end-to-end tests with Selenium, as in: "Run focused integration or end-to-end tests (e.g. Selenium)." It is necessary to run these on an external staging server (e.g. Heroku), and to run the integration tests the application needs to connect to an external system, e.g. a database.
This very likely has something to do with our .travis.yml file, which currently looks something like this (even though we have gone back and forth with the file a lot):
...
script:
  - ./gradlew check
deploy:
  provider: heroku
  api_key:
    secure: *****
  app: *****
after_deploy:
  - ./gradlew seleniumXvfb
Basically, what we want to do is first run ./gradlew check, which runs the unit tests, then deploy the application to Heroku, and finally run the Selenium (end-to-end) tests against the staging server (Heroku).
But what happens is that Travis doesn't seem to care when the Selenium tests fail. Travis shows the green checkmark for the build as a whole, like everything is OK.
When this is all over, we want to deploy to a production server.
Thank you.
after_deploy currently doesn't fail the build in Travis CI.
If you want to test your application against a running staging system on Heroku, then I'd recommend deploying this manually as part of the before_script step and then running the ./gradlew seleniumXvfb command in your script section.
That way you can then do a proper production deployment based on the success of testing against your staging system.
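Concretely, that suggestion could look something like this (a sketch; deploy-to-staging.sh is a hypothetical script wrapping a manual Heroku deploy, since the built-in deploy provider runs too late to gate the build):

```yaml
before_script:
  - ./gradlew check          # unit tests first
  - ./deploy-to-staging.sh   # hypothetical manual deploy to the Heroku staging app
script:
  - ./gradlew seleniumXvfb   # a failure here now fails the build
```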

TeamCity and Rails deployment

Can anyone please point me to documentation or a demo on how to deploy a Ruby on Rails web app using TeamCity once the build has passed? The scenario is: deploy the web app by starting the web server on the build machine, and then fire the UI functional tests. (Note: I would like to know if all these steps can be automated using TeamCity.)
You can use Capistrano to deploy with TeamCity. Capistrano is great for deploying Rails apps, and you can automate it reasonably easily so that TeamCity simply fires your Capistrano job.
More on Capistrano https://github.com/capistrano/capistrano/wiki/
You probably want to use RVM with TC too.
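A TeamCity build step firing Capistrano can be as small as this (a sketch; the "production" stage name assumes a standard Capistrano 3 setup):

```shell
# Command-line build step in TeamCity, run after the build/tests pass
bundle install                     # installs Capistrano and friends
bundle exec cap production deploy  # fires the Capistrano deploy job
```

The deploy itself (server list, restart commands, UI test kickoff) lives in the Capistrano config, so TeamCity only needs this one step.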

Hadoop development environment, what does yours look like?

I would like to know what your Hadoop development environment looks like.
Do you deploy jars to a test cluster, or run jars in local mode?
What IDE do you use and what plugins do you use?
How do you deploy completed projects to be run on servers?
What are your other recommendations for setting up my own Hadoop development/test environment?
It's extremely common to see people writing Java MR jobs in an IDE like Eclipse or IJ. Some even use plugins like Karmasphere's dev tools, which are handy. As for testing, the normal process is to unit test business logic as you normally would. You can unit test some of the MR surrounding infrastructure using the MRUnit classes (see Hadoop's contrib). The next step is usually testing in the local job runner, but note there are a number of caveats here: the distributed cache doesn't work in local mode, and you're singly threaded (so static variables are accessible in ways they won't be in production). The next step (and most common test environment) is pseudo-distributed mode - all daemons running, but on a single box. This is going to run code in different JVMs with multiple tasks in parallel and will reveal most developer errors.
MR job jars are distributed to the client machine in different ways. Usually custom deployment processes are seen here. Some folks use tools like Capistrano or config management tools like Chef or Puppet to automate this.
My personal development is usually done in Eclipse with Maven. I build jars using Maven's Assembly plugin (packages all dependencies in a single jar for easier deployment, but fatter jars). I regularly test using MRUnit and then pseudo-distributed mode. The local job runner isn't very useful in my experience. Deployment is almost always via a configuration management system. Testing can be automated with a CI server like Hudson.
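The build-and-run cycle described above might look like this (a sketch; the driver class, jar name, and HDFS paths are hypothetical):

```shell
# Build the fat jar with Maven's Assembly plugin (all dependencies in one jar)
mvn clean package

# Submit it to the pseudo-distributed cluster running on this box
hadoop jar target/myjob-jar-with-dependencies.jar \
    com.example.MyJobDriver /user/me/input /user/me/output
```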
Hope this helps.