How to do integration testing with front-end code and API in separate repositories? - ruby-on-rails-3

Any strategies for doing this? We have a Rails codebase that is currently fully integrated (same app serves up JS assets as does the back-end heavy lifting), but are thinking of extracting the two out into separate services, each in their own git repository and running on separate servers.
I'm planning on unit/acceptance testing the API with a small ruby HTTP client that will also act as documentation for the API endpoints, and the JS front-end (Brunch.io, Backbone, Chaplin) will have unit/acceptance testing internally as well... but I feel like I should be writing cucumber tests that integrate the two together, right? Where do those cukes live? In which repo?
Appreciate any insight here. Thanks!

In a general sense, if you have code that is for both your server and your client, then the "right" place to keep it depends on which side of your app is more central or "heavier": the client or the server.
That being said, from the way you describe things in your question, it kind of sounds like you consider the Rails app "primary". For instance, you mention that you currently have "JS assets" integrated with/being served by your "Rails codebase" ... not Rails assets being served by your JS server ;-)
So that answers things on the theoretical level, but I also think it makes sense to put the code in your Rails codebase for a practical reason: Cucumber is a Ruby tool, not a JS one. You might use it to test some non-Ruby code, but ultimately it's being run by Ruby.
I don't know for sure, but I suspect you'll create headaches for yourself if you try to put your Cucumber specs in your JS codebase and then run them from your Rails codebase. Plus, that really tightly couples the two codebases: to run your tests you need both codebases on your test runner, whereas if you keep the Cucumber stuff in Rails-land your test runner only needs your Rails code, and it can run against a different server that has your JS code.
So ultimately it sounds to me like the Cucumber stuff belongs in Rails-land ... but going the other way (and storing it with your JS repo) doesn't seem horrible to me either, just potentially more problematic.

Related

How to test/debug cross-platform desktop apps (Windows, macOS) with limited resources

I am trying to build a desktop app.
I am thinking of using Electron on the recommendation of a web-developer friend of mine, but as I am the sole developer, I don't have the means to test the software on different platforms (OS, hardware, etc.). So I am anticipating that this will eventually cause problems when testing/debugging the software on different platforms and operating systems.
I have ruled out web apps because of users' privacy concerns about remote data hosting.
The software is pretty lightweight and is roughly equivalent to an image viewer app with some slight modifications.
How do I handle the variations across different platforms?
Any literature suggestions pointing me in the general direction are also welcome.
Sometimes it helps to think of Electron as two processes: the renderer and the main process. Generally the renderer process, which runs the HTML/CSS/JS, is its own isolated component, and you communicate with the main process using IPC.
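Roughly, that split looks like the sketch below (the 'save-file' channel and handler are made up for illustration; it assumes a preload script with contextIsolation enabled):

```js
// preload.js -- exposes a narrow, explicit API to the page
const { contextBridge, ipcRenderer } = require('electron');

contextBridge.exposeInMainWorld('api', {
  saveFile: (contents) => ipcRenderer.invoke('save-file', contents),
});

// main.js -- the only place that touches the OS (dialogs, filesystem, ...)
const { ipcMain, dialog } = require('electron');
const fs = require('fs');

ipcMain.handle('save-file', async (_event, contents) => {
  const { canceled, filePath } = await dialog.showSaveDialog({});
  if (!canceled && filePath) fs.writeFileSync(filePath, contents);
  return filePath;
});

// In the renderer, the UI just calls: window.api.saveFile('hello');
```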
So generally for the UI, you can use most web-based testing frameworks to test reliability. At Amna, for example, we use Cypress as our E2E testing platform. You can also use something like QAWolf. Both should work with localhost. In general, most website testing tools should work fine, and consistently across platforms.
Where this gets tricky is when a UI functionality makes a call to the OS or the main process. For example, saving to the disk, or launching a program.
The general flow is this, and I've yet to find radically simpler options:
Set up a VM or buy a machine with the corresponding OS. I used Spot VMs in Azure for this.
Manually test the scenarios you care about in each VM before you ship
If you have a lot of cases that rely on the OS, then you should be able to further optimize this by using an automated test runner like Spectron.
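If you go that route, a Spectron run is roughly this shape (a minimal sketch; the entry path and assertion are placeholders):

```js
const { Application } = require('spectron');
const electronPath = require('electron'); // in a plain Node process this resolves to the Electron binary
const path = require('path');
const assert = require('assert');

async function smokeTest() {
  const app = new Application({
    path: electronPath,
    args: [path.join(__dirname, '..')], // directory containing your app's main entry point
  });

  await app.start();                         // launches the real main process
  await app.client.waitUntilWindowLoaded();  // WebdriverIO client attached to the renderer
  assert.strictEqual(await app.client.getWindowCount(), 1);
  await app.stop();
}

smokeTest().catch((err) => { console.error(err); process.exit(1); });
```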
From experience, what I've realized is that most of my iterations happen on the UI rather than on the underlying functions with the cross-platform capabilities. And if your code has good separation (e.g. contextIsolation: true, nodeIntegration: false), it should be pretty obvious when you need to do an entire "cross-platform" test vs just UI tests.
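For reference, that separation comes down to a few flags on the window options (a minimal sketch; the preload path is a placeholder):

```js
const { app, BrowserWindow } = require('electron');
const path = require('path');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      contextIsolation: true,                      // page runs in an isolated world
      nodeIntegration: false,                      // no direct Node/OS access from the page
      preload: path.join(__dirname, 'preload.js'), // the only bridge to the main process
    },
  });
  win.loadFile('index.html');
});
```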
I'm not familiar with a lot of large-scale Electron testing frameworks, but I do know that ToDesktop handles package building and generating binaries to perform a smoke test and verify that things open across different operating systems.
It depends.
The answer depends on what you are building, so it makes sense to figure out what you actually want to build. Some questions you might ask yourself:
Do I need a database?
Do I need authentication?
Do I need portability?
Do I need speed to market?
Do I want to pick a language I'm familiar with?
These are all good questions and there are dozens more we all ask ourselves. However, back to your original question.
Electron is a fine choice
Yes, there are alternatives. But Electron is used for Visual Studio Code, Facebook Messenger, Microsoft Teams and Figma. Choosing Electron means there are other developers making apps and there are proven apps in the market so you don't have to worry about a dead ecosystem.
Electron is easy to onboard if you know web technologies: think JS, HTML and CSS. If you know these, you can transfer your web dev knowledge and make a cross-platform app. You don't have to worry about learning each OS, since the UI is the webpage, which will look mostly* the same on each OS. (*Some very minor differences, but essentially the same.)
Cross-platform deployment is easy
There are a few ways of bringing your app to multiple platforms; I happen to be most familiar with electron-builder, but the other two solutions work as well.
Many templates to start with
I am biased, since I'm the author of secure-electron-template which is one of the many templates you can choose from when starting an app. However, I recently reviewed all Electron templates and found that only 4 do not have serious security vulnerabilities.
The Electron framework is frequently updated, and over the course of the past few years there has been a shift in the way Electron apps are made. Some earlier frameworks didn't have good secure defaults, which some of the older Electron templates inherited, and thus they aren't as secure as newer templates that follow the security guidelines.
If you decide on Electron, give my template a try. It's got a number of features I'm building out in order to help the community with features they might want (e.g. internationalization (i18n), saving local data, custom context menus, page routing, E2E unit testing, and license key validation, to name a few).

Running integration/e2e tests on top of a Kubernetes stack

I’ve been digging a bit into the way people run integration and e2e tests in the context of Kubernetes and have been quite disappointed by the lack of documentation and feedback. I know there are amazing tools such as kind or minikube that allow you to run resources locally. But in the context of a CI, and with a bunch of services, it does not seem to be a good fit, for obvious resource reasons. I think there are great opportunities with running tests for:
Validating manifests or helm charts
Validating that a component behaves well as part of a bigger whole
Validating the global behaviour of a product
The point here is not really about the testing framework but more about the environment on top of which the tests could be run.
Do you share my thoughts? Have you ever run this kind of test? Do you have any feedback or insights about it?
Thanks a lot
Interesting question and something that I have worked on over the last couple of months for my current employer. Essentially we ship a product as docker images with manifests. When writing e2e tests I want to run the product as close to the customer environment as possible.
Essentially to solve this we have built scripts that interact with our standard cloud provider (GCloud) to create a cluster, deploy the product and then run the tests against it.
For the major cloud providers this is not a difficult task but it can be time-consuming. There are a couple of things that we have learnt the hard way to keep in mind while developing the tests.
Concurrency: this may sound obvious, but do think about the number of concurrent builds your CI can run.
Latency from the cloud: don't assume that you will get an instant response to every command that you run in the cloud. Also think about the timeouts. If you bring up a product with lots of pods and services, what is an acceptable start-up time?
Errors causing build failures: this is an interesting one. We have seen errors in the build due to network errors when communicating with our test deployment. These are nearly always transient. It is best to avoid letting these make the build fail.
One thing to look at is that GitLab provides some documentation on how to build and test images in their CI pipeline.
On my side I use Travis CI. I build my container image inside it, then run k8s with kind (https://kind.sigs.k8s.io/) inside Travis CI, and then launch my e2e tests.
There is some additional information in this blog post: https://k8s-school.fr/resources/en/blog/k8s-ci/
And the scripts to install kind inside Travis CI in 2 lines: https://github.com/k8s-school/kind-travis-ci.git. It allows lots of customization on the k8s side (enable PSP, change the CNI plugin).
Here is an example: https://github.com/lsst/qserv-operator
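Inside the CI job, the overall flow boils down to something like this sketch (a Node script shelling out to the kind and kubectl CLIs; the image name, manifest path and test command are placeholders):

```js
// Hypothetical CI step: create a throwaway kind cluster, deploy, test, tear down.
const { execSync } = require('child_process');

function sh(cmd) {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: 'inherit' });
}

try {
  sh('kind create cluster --name e2e');                          // Kubernetes-in-Docker cluster
  sh('kind load docker-image myapp:ci --name e2e');              // make the CI-built image visible to the cluster
  sh('kubectl apply -f manifests/');                             // deploy the product under test
  sh('kubectl rollout status deployment/myapp --timeout=300s');  // wait for the pods to come up
  sh('npm run e2e');                                             // run the e2e suite against the cluster
} finally {
  sh('kind delete cluster --name e2e');                          // always clean up
}
```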
Or I use GitHub Actions CI, which makes it easy to install kind (https://github.com/helm/kind-action), provides plenty of features, and offers free worker nodes for open-source projects.
Here is an example: https://github.com/xrootd/xrootd-k8s-operator
Please note that GitHub Actions workers may not scale for large builds/e2e tests. Travis CI scales pretty well.
In my understanding, this workflow could be moved to an on-premise GitLab CI where your application can interact with other services located inside your network.
One interesting thing is that you do not have to maintain a k8s cluster for your CI; kind will do it for you!

How do I run cucumber tests when testing a REST or GraphQL API

This is my first time playing with cucumber and also creating a suite which tests an API. My question is: when testing the API, does it need to be running?
For example I've got this in my head,
Start express server as background task
Then, when that has booted up (how would I know if that happened?), run the cucumber tests?
I don't really know the best practices for this, which I think is the main problem here, sorry.
It would be helpful to see a .travis.yml file or a bash script.
I can't offer you a working example. But I can outline how I would approach the problem.
Your goal is to automate the verification of a REST API or similar. That is, making sure that a web application responds in the expected way given a specific request.
For some reason you want to use Cucumber.
The first thing I would like to mention is that Behaviour-Driven Development, BDD, and Cucumber are not testing tools. The purpose of BDD and Cucumber is to act as a communication tool between those who know what the system should do, those who write code to make it happen, and those who verify the behaviour. That's why the examples are written in something close to natural language.
How would I approach the problem then?
I would verify the vast majority of the behaviour by calling the methods that make up the API from a unit test or a Cucumber scenario. That is, verify that they work properly without a running server. And without a database. This is fast and speed is important. I would probably verify more than 90% of the logic this way.
I would verify the wiring by firing up a server and verify that it is possible to reach the methods verified in the previous step. This is slow so I would do as little as possible here. I would, if possible, fire up the server from the code used to implement the verification. I would start the server as a part of the test setup.
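A minimal sketch of that setup, assuming the asker's Express app and cucumber-js (the file paths and endpoint are hypothetical): start the server in a BeforeAll hook and only proceed once it is actually listening.

```js
// features/support/hooks.js
const { BeforeAll, AfterAll } = require('@cucumber/cucumber');
const app = require('../../src/app'); // your Express app, exported without calling .listen()

let server;

BeforeAll(async () => {
  // Listening on port 0 picks a free port; the promise resolves only once the
  // server is accepting connections -- that's how you "know it has booted up".
  await new Promise((resolve) => {
    server = app.listen(0, resolve);
  });
  process.env.API_URL = `http://localhost:${server.address().port}`;
});

AfterAll(() => server.close());

// features/step_definitions/api.steps.js
const { When, Then } = require('@cucumber/cucumber');
const assert = require('assert');

let response;

When('I request the health endpoint', async () => {
  response = await fetch(`${process.env.API_URL}/health`); // Node 18+ global fetch
});

Then('the response status is {int}', (status) => {
  assert.strictEqual(response.status, status);
});
```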
This didn't involve any external tools. It only involved your programming language and some libraries. The reason for doing it this way is that I want it to be as portable as possible. The fewer tools you use, the easier it gets to work with something.
It has happened that I have done some of the setup in my build tool and had it start a server before running the integration tests. This is usually more heavyweight and something I avoid if possible.
So, verify the behaviour without a server. Verify the wiring with a server. It is important to only verify the wiring in this step. The logic has been verified earlier, there is no need to repeat it.
Speed, as in a fast feedback loop, is very important. Building and testing the entire system should, in a good world, take seconds rather than minutes.
I have a working example if you're interested (running on travis).
I use docker-compose to launch the API & required components such as the database, then I run cucumber-js tests against the running stack.
docker-compose is also used for local development & testing.
I've also released a library to help write cucumber tests for APIs: https://github.com/ekino/veggies.

Integration testing of func in OSGI container

I'm using FuseESB to run my app, which is essentially an OSGi container (Felix). I'd like to figure out an approach to test my OSGi services in integration mode (including external dependencies like the DB, external services, etc). My first thought is the ability to run a specific bundle in the container that pulls all the app services into the tests defined in that bundle. Can somebody help with that kind of issue? THANKS!
There are different ways of testing this.
Since FuseESB is based on Apache Karaf, you might use the Apache Karaf pax-exam tools to test a complete container setup automatically.
Another way of testing your OSGi bundles in an OSGi container is to use pax-exam directly. Last but not least, if you just want to test your service look-up functionality, you might test with pojosr; it's quite nice for testing but has its limits, especially if you depend on container features.
That said you'll find information at the following pages:
Pax-Exam
Apache Karaf
A sample of how Pax-Web uses pax-exam in its iTests
You may find http://www.javabeat.net/2011/11/how-to-test-osgi-applications/ helpful as an overview of the various OSGi test options. Configuring PAX-Exam to pull in your whole FuseESB container and get all your app services present will involve certain challenges, but once you've got the knack it can be very handy.
bndtools also offers the possibility to run JUnit tests inside the container.

Artifice for Objective-C?

Is there an Objective-C version of Artifice?
If not, how would I design/develop/create it?
Related Questions
Mock HTTP response via Objective-C
Mock NSURLConnection
I think I might be able to help you here.
I have a Ruby library that is somewhat similar to Artifice, albeit more self-contained and built on top of Sinatra, called Mimic. I'm pretty happy with it, and one of my favourite features is that as well as being configured using its Ruby DSL (or using the Sinatra API directly), it can be configured remotely or from any process that speaks HTTP. This means you can use it in your Objective-C tests and configure it from the tests too (rather than having, say, a set of external fixtures in a Ruby file).
In the name of eating my own dog food, I recently converted the acceptance tests for my Objective-C RestClient port, Resty, to use Mimic. The Mimic daemon is started up as part of the build process, and my stubs are configured directly in the tests, using a thin Objective-C wrapper around the Mimic REST API.
As you can see, I strive very hard for test clarity!
Those tests use OCUnit but you can use this with Kiwi. In fact, the assertEventually macro in the above tests was the basis of the asynchronous testing support that I ported to Kiwi.
I've since extracted the Objective-C wrapper for Mimic from LRResty and moved it into the Mimic repository. You may want to check out the Resty project to see how my project and the tests are configured. If you have any questions, please ask.
One caveat: I haven't found a way of getting these tests to run successfully in Xcode 4, using the "Test" option, due to the way that it runs. In Xcode 3, I rely on Run Script build phases to start and stop the Mimic daemon, but because Xcode 4 doesn't run the tests as part of the build process this doesn't work. I've tried to accomplish something similar using pre/post test actions but unfortunately these are woefully inadequate due to various bugs.
Bonus tip: I find the Charles debugging proxy a massive help when working with web services, and you can use it with Mimic too; the Objective-C wrapper can be proxied through Charles so you can see exactly what is happening, both in terms of stub configuration and actual HTTP requests (Mimic can even be configured to return some helpful debugging data in the response headers).
Do let me know if you have any questions.