AngularJS unit testing without relying on mocks for request data

I've finally got my AngularJS testing environment set up, and I'm trying to verify that the pages of my application actually work. This includes templates, routes, requests, directives and so on.
From what I've discovered, mocks end up doing most of the work when testing a working application. While this is nice, I would still like to test against the actual templates and data from my application.
Whenever a GET call is made within my application, I get an error that looks like:
Unexpected request: GET application/templates/home.html
No more request expected
Error: Unexpected request: GET application/templates/home.html
It turns out the runner can't fetch the template, which makes sense: from what I can tell, the test runner (testacular) is unable to download the template file since it runs in a different environment altogether (no HTTP address provided). The only solution I've come across is to stub the request with a mock using the $httpBackend service. While that is useful in certain situations, I want to fetch the actual data.
Any idea on how to fix this?

You probably want to write two different kinds of tests:
Unit tests with mocks to test components in isolation: controllers, directives, etc.
e2e tests that run your full stack, including data from your own server.
These two kinds of tests each require a separate testacular configuration.
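On the unit side, stubbing the template request with angular-mocks' $httpBackend looks roughly like this (a minimal sketch; the module name and stub markup are assumptions, and the template path is taken from the error in your question):

```js
// Minimal sketch: stub the template request with angular-mocks.
// 'myApp' and the stub markup are assumptions about your setup.
describe('home route', function () {
  var $httpBackend;

  beforeEach(module('myApp'));

  beforeEach(inject(function (_$httpBackend_) {
    $httpBackend = _$httpBackend_;
    $httpBackend.whenGET('application/templates/home.html')
                .respond('<div>stub template</div>');
  }));

  it('loads the home template', inject(function ($http) {
    $http.get('application/templates/home.html');
    $httpBackend.flush(); // resolve the stubbed request synchronously
  }));
});
```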
Angular provides support for navigating the test browser and interacting with your application, as a user would. The explanation and API is documented here: http://docs.angularjs.org/guide/dev_guide.e2e-testing
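For example, a scenario test might look like this (a sketch; the route and expected heading text are made up):

```js
// e2e sketch using the angular-scenario runner; it navigates the test
// browser like a user would, so real templates are fetched.
describe('home page', function () {
  it('renders the real template from the server', function () {
    browser().navigateTo('/');
    expect(element('h1').text()).toEqual('Welcome');
  });
});
```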
My testacular e2e config and explanation is posted here:
https://stackoverflow.com/a/13410567/1739247
The important part for running the tests against your own server is proxies, as explained in that post.
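For reference, a minimal e2e config along those lines might look like this (a sketch; the port and file layout are assumptions about your project):

```js
// Sketch of a separate testacular config for e2e runs.
basePath = '../';

files = [
  ANGULAR_SCENARIO,
  ANGULAR_SCENARIO_ADAPTER,
  'test/e2e/**/*.js'
];

// Proxy everything to the real dev server so actual templates and
// data are fetched instead of mocks.
proxies = {
  '/': 'http://localhost:8000/'
};

// Keep the runner's own pages off the proxied namespace.
urlRoot = '/__testacular/';

autoWatch = false;
singleRun = true;
```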

Related

How can I make my TestCafe framework handle A/B experiments on web apps?

I have built an automation framework for testing our web app that runs as a regression pack after each new deploy to our staging environment. The issue is that the tests fail whenever a new experiment touches the part of the app they cover; e.g., the home page validation tests fail if there is a new home page experiment. How can I make my tests robust enough to handle this, perhaps by ignoring experiments altogether, or by always ensuring the page loads in the non-experiment (control) group?
I thought a possible solution would be for the web team to add a new cookie that controls the experiments, and then just set that cookie in a hook before my tests run. Would that work, or is there a better way?
The solution with a cookie that controls your A/B experiments will work well with TestCafe. TestCafe allows you to work with cookies using the ClientFunction mechanism or client scripts.
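As a sketch, assuming your web team exposes an opt-out cookie (the cookie name, value, and URL here are made up), the hook could look like this:

```js
import { Selector, ClientFunction } from 'testcafe';

// Hypothetical opt-out cookie; your web team would define the real name/value.
const optOutOfExperiments = ClientFunction(() => {
    document.cookie = 'ab_experiments=control; path=/';
});

fixture('Home page')
    .page('https://staging.example.com') // placeholder URL
    .beforeEach(async t => {
        await optOutOfExperiments();
        // Reload so the server sees the cookie on the next request.
        await t.eval(() => location.reload());
    });

test('home page validation without experiments', async t => {
    await t.expect(Selector('h1').exists).ok();
});
```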

How do I run Cucumber tests when testing a REST or GraphQL API?

This is my first time playing with Cucumber, and also my first time creating a suite that tests an API. My question is: when testing the API, does it need to be running?
For example, here's what I have in my head:
Start the Express server as a background task
Then, once it has booted up (how would I know when that's happened?), run the Cucumber tests
I don't really know the best practices for this, which I think is the main problem here, sorry.
It would be helpful to see a .travis.yml file or a bash script.
I can't offer you a working example. But I can outline how I would approach the problem.
Your goal is to automate the verification of a rest api or similar. That is, making sure that a web application responds in the expected way given a specific question.
For some reason you want to use Cucumber.
The first thing I would like to mention is that Behaviour-Driven Development, BDD, and Cucumber are not testing tools. The purpose of BDD and Cucumber is to act as a communication tool between those who know what the system should do, those who write the code to make it happen, and those who verify the behaviour. That's why the examples are written in (almost) natural language.
How would I approach the problem then?
I would verify the vast majority of the behaviour by calling the methods that make up the API from a unit test or a Cucumber scenario. That is, verify that they work properly without a running server. And without a database. This is fast and speed is important. I would probably verify more than 90% of the logic this way.
I would verify the wiring by firing up a server and verify that it is possible to reach the methods verified in the previous step. This is slow so I would do as little as possible here. I would, if possible, fire up the server from the code used to implement the verification. I would start the server as a part of the test setup.
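In cucumber-js terms (since the question mentions Express), that setup could look something like this sketch; the app module path is an assumption, and it assumes the app is exported without calling .listen():

```js
const { BeforeAll, AfterAll } = require('@cucumber/cucumber');
const app = require('../../src/app'); // hypothetical: your Express app, exported without .listen()

let server;

BeforeAll(function (done) {
  // Starting the server in the test setup answers "how do I know it booted?":
  // the callback fires once the server is actually listening.
  server = app.listen(0, done); // port 0 = let the OS pick a free port
});

AfterAll(function (done) {
  server.close(done);
});
```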
This didn't involve any external tools; it only involved your programming language and some libraries. The reason for doing it this way is that I want the setup to be as portable as possible. The fewer tools you use, the easier everything gets to work with.
It has happened that I have done some of the setup in my build tool and had it start a server before running the integration tests. This is usually more heavyweight and something I avoid if possible.
So, verify the behaviour without a server. Verify the wiring with a server. It is important to only verify the wiring in this step. The logic has been verified earlier, there is no need to repeat it.
Speed, as in a fast feedback loop, is very important. Building and testing the entire system should, in a good world, take seconds rather than minutes.
I have a working example if you're interested (running on Travis).
I use docker-compose to launch the API and required components, such as the database, and then run cucumber-js tests against the running stack.
docker-compose is also used for local development & testing.
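Once docker-compose has the stack up, the steps themselves can stay plain HTTP calls. A minimal sketch (the base URL is an assumption, and global fetch requires Node 18+):

```js
const { When, Then } = require('@cucumber/cucumber');
const assert = require('node:assert');

// Where the docker-compose stack is reachable; adjust to your setup.
const BASE_URL = process.env.API_BASE_URL || 'http://localhost:3000';

let response;

When('I GET {string}', async function (path) {
  response = await fetch(`${BASE_URL}${path}`);
});

Then('the response status is {int}', function (status) {
  assert.strictEqual(response.status, status);
});
```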
I've also released a library to help with writing Cucumber tests for APIs: https://github.com/ekino/veggies.

Capture web driver network traffic across all browsers

I want to capture all the network calls from WebDriver in Java. I am not doing any UI testing, just testing JS execution and the requests and responses of some network calls.
I tried using BrowserMob, as is suggested in most forums, but I need it to work across all browsers. It worked flawlessly with Firefox, but I faced issues with the others; the Safari driver doesn't even support a Proxy capability.
I don't want to use Fiddler, as it involves some manual steps around invoking it and storing the calls, whereas BrowserMob, being an in-code proxy, can be integrated much more smoothly.
I also tried using the RC-like package included in the Selenium standalone server. But I have some HTTPS calls and some nested iframes across domains; I am particularly interested in a cross-domain POST call, and it doesn't work out that well there. Also, people keep saying it's not recommended to use that package.
So, my idea is a standalone proxy server running on a machine. Using host entries, we'll point WebDriver at the proxy instead of the actual server. The proxy will record all the incoming calls and route them to the actual server host. Later, I can make a request to the proxy to get back all the calls it intercepted. I am not sure whether that's still called a proxy or a router.
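Roughly, I picture something like the following (a Node.js sketch purely to illustrate the record-and-forward idea; the host name is a placeholder, and the real tool could be in any language):

```js
const http = require('http');

const recorded = [];

http.createServer((req, res) => {
  // Local endpoint to fetch everything intercepted so far.
  if (req.url === '/__recorded') {
    res.setHeader('Content-Type', 'application/json');
    return res.end(JSON.stringify(recorded));
  }

  recorded.push({ method: req.method, url: req.url, headers: req.headers });

  // Route the call on to the actual server host.
  const upstream = http.request({
    host: 'real-server.example.com', // placeholder for the actual host
    path: req.url,
    method: req.method,
    headers: req.headers
  }, upstreamRes => {
    res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(res);
  });

  req.pipe(upstream);
}).listen(8888);
```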
I came across TCPmon, but it's no longer supported. Does anyone know of similar tools that run on Unix systems, or any alternative solutions?
We modified the Fiddler rules script to include a new exec action. If you use Fiddler's native script editor, it also provides autocomplete, so we were able to find our way around comfortably. The syntax is similar to that of JavaScript.
The Fiddler package comes with an ExecActions.exe, which can be used to pass console arguments to a running Fiddler instance from the command prompt.
The code we wrote processed all the sessions captured by Fiddler and wrote them to a file in a custom JSON format; we later used GSON to deserialize it.
Please let me know, if you want further details.

How to do integration testing with front-end code and API in separate repositories?

Any strategies for doing this? We have a Rails codebase that is currently fully integrated (the same app that serves the JS assets also does the back-end heavy lifting), but we're thinking of extracting the two into separate services, each in its own git repository and running on separate servers.
I'm planning on unit/acceptance testing the API with a small Ruby HTTP client that will also act as documentation for the API endpoints, and the JS front-end (Brunch.io, Backbone, Chaplin) will have its own unit/acceptance tests internally as well... but I feel like I should be writing Cucumber tests that integrate the two, right? Where do those cukes live? In which repo?
Appreciate any insight here. Thanks!
In a general sense, if you have code that is for both your server and your client, then the "right" place to keep it depends on which side of your app is more central or "heavier": the client or the server.
That being said, from the way you describe things in your question, it sounds like you consider the Rails app "primary". For instance, you mention that you currently have "JS assets" integrated into/served by your "Rails codebase" ... not Rails assets being served by your JS server ;-)
So that answers things on the theoretical level, but I also think it makes sense to put the code in your Rails codebase for a practical reason: Cucumber is a Ruby tool, not a JS one. You might use it to test some non-Ruby code, but ultimately it's run by Ruby.
I don't know for sure, but I suspect you'll create headaches for yourself if you try to put your Cucumber specs in your JS codebase and then run them from your Rails codebase. Plus, that tightly couples the two codebases: to run your tests, your test runner needs both codebases, whereas if you keep the Cucumber stuff in Rails-land, your test runner only needs your Rails code and can run against a different server that hosts your JS code.
So ultimately it sounds to me like the Cucumber stuff belongs in Rails-land ... but going the other way (and storing it with your JS repo) doesn't seem horrible to me either, just potentially more problematic.

Artifice for Objective-C?

Is there an Objective-C version of Artifice?
If not, how would I design/develop/create it?
Related Questions
Mock HTTP response via Objective-C
Mock NSURLConnection
I think I might be able to help you here.
I have a Ruby library called Mimic that is somewhat similar to Artifice, albeit more self-contained and built on top of Sinatra. I'm pretty happy with it, and one of my favourite features is that as well as being configured using its Ruby DSL (or the Sinatra API directly), it can be configured remotely from any process that speaks HTTP. This means you can use it in your Objective-C tests and configure it from the tests too (rather than having, say, a set of external fixtures in a Ruby file).
In the name of eating my own dog food, I recently converted the acceptance tests for Resty, my Objective-C RestClient port, to use Mimic. The Mimic daemon is started as part of the build process, and my stubs are configured directly in the tests using a thin Objective-C wrapper around Mimic's REST API.
As you can see, I strive very hard for test clarity!
Those tests use OCUnit, but you can use this with Kiwi. In fact, the assertEventually macro in the above tests was the basis of the asynchronous testing support that I ported to Kiwi.
I've since extracted the Objective-C wrapper for Mimic from LRResty and moved it into the Mimic repository. You may want to check out the Resty project to see how my project and the tests are configured. If you have any questions, please ask.
One caveat: I haven't found a way to get these tests to run successfully in Xcode 4 using the "Test" option, due to the way it runs tests. In Xcode 3, I rely on Run Script build phases to start and stop the Mimic daemon, but because Xcode 4 doesn't run the tests as part of the build process, this doesn't work. I've tried to accomplish something similar using pre/post test actions, but unfortunately those are woefully inadequate due to various bugs.
Bonus tip: I find the Charles debugging proxy a massive help when working with web services, and you can use it with Mimic too; the Objective-C wrapper can be proxied through Charles so you can see exactly what is happening, both in terms of stub configuration and actual HTTP requests (Mimic can even be configured to return some helpful debugging data in the response headers).
Do let me know if you have any questions.