Modify an HTTP response in a Protractor test - testing

I'm trying to write some end to end tests for our application's login process, but am having trouble getting my head around the best way to set up the scenario where the user needs to change his password.
When our server responds to a successful login, a user object is returned with a changePassword field. The client then inspects the response and redirects accordingly.
My problem is getting the test set up so that the changePassword field is set - what is the best approach to use?
I see my options as:
Have a test setup and tear-down script for the server that creates a brand new user specifically for the test run, with the changePassword flag set in the database.
This seems like the most end-to-end approach, but is probably also the most effort and code.
Somehow intercept the HTTP response in the test and modify the changePassword flag to be set for this test only.
Mock the HTTP response completely. This approach is the furthest removed from an end-to-end test, but is perhaps the simplest?
Which is the best or most common approach? Also, any general pointers on how to actually implement the above (particularly 1 and 2) with Protractor would be great - I'm finding it hard to get this conceptually straight in my head, and hence hard to know what to search for.
I'm using Protractor as the test framework, with AngularJS powering the client side, and a Node server utilising (among other things) Express.js and MongoDB.

Having thought about this further, option 1 is the best solution, but is not always possible.
Option 2 is also possible, and option 3 should be avoided.
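For option 1, the work usually amounts to a small seeding script run before the suite (for example from Protractor's onPrepare). A minimal sketch, assuming the Node mongodb driver, a local database, and a users collection; the names and fields here are illustrative only, not taken from the original question:
// seed-user.js (sketch) - create a throwaway user with the changePassword flag set
const { MongoClient } = require('mongodb');

async function seedChangePasswordUser() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    await client.db('myapp').collection('users').insertOne({
      email: 'e2e-change-password@example.com',
      password: 'hashed-password-here', // hash this the same way the app does
      changePassword: true
    });
  } finally {
    await client.close();
  }
}

module.exports = seedChangePasswordUser;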
For option 2, a mock module can be created like so (CoffeeScript):
e2eInterceptors = ->
  angular.module('e2eInterceptors', [])
    .factory('loginInterceptor', ->
      response: (response) ->
        # Only edit responses we are interested in
        return response unless response.config.url.match(/login/)
        # do the modification
        response.data.changePassword = true
        # return the modified response
        return response
    )
    .config(($httpProvider) ->
      $httpProvider.interceptors.push('loginInterceptor')
    )
You can then inject this module into your tests using
browser.addMockModule('e2eInterceptors', e2eInterceptors)
If you want to do this globally, you can put this in the onPrepare function in your Protractor config file; otherwise just call it when needed in individual tests.
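For example, a minimal Protractor config sketch; the file layout and the require of the module function are assumptions, not part of the original answer:
// protractor.conf.js (sketch) - register the mock module for every spec
const e2eInterceptors = require('./e2e-interceptors'); // hypothetical file exporting the function above

exports.config = {
  specs: ['e2e/**/*.spec.js'],
  onPrepare: function () {
    // every page loaded with browser.get() will now include the interceptor module
    browser.addMockModule('e2eInterceptors', e2eInterceptors);
  }
};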

I think your first approach is the most appropriate.
It would be useful anyway to test the new user creation, so it is not a waste.
For example, this post seems to describe something similar: http://product.moveline.com/testing-angular-apps-end-to-end-with-protractor.html

Related

Karate DSL: How to initiate a long running rest call, perform other rest calls, then assert on the first request's response

I am learning Karate DSL in order to determine if it is a viable automation solution for our new API.
We have a unique environment in which we have a REST API to test, but also use REST services to perform other required actions while the original request is awaiting a response. The REST calls perform robotic actions to manipulate hardware, query our servers, etc.
We need the ability to send a REST request, perform various other REST requests (with assertions) while awaiting a response to the first request. Then, finally assert that the original request gets the correct response payload based on the actions performed between the first request and its response.
Rough example:
Feature: Async test

Background:
  * def defaultAssertion = { success: true }
  Given url 'http://foo/'

Scenario: Foo test
  # start the long-running call
  Given path 'start'
  And request { externalId: 'id1' }
  When method get
  # perform another call that resolves immediately
  Given path 'robot-action'
  When method get
  Then status 200
  * match response contains deep defaultAssertion
  # somehow assert on the first request's response
  Then status 200
  * match response contains deep defaultAssertion
Obviously the example above does not work, but I am hoping we can structure our tests similarly.
I know tests can run in parallel, but I am not sure how to encapsulate them as "one test" vs "multiple tests" and control order (which is required for this to work properly).
There is documentation on Async behavior, but I found it difficult to follow. If anyone can provide more context on how to implement the example I would greatly appreciate it.
Any suggestions would be warmly welcomed and examples would be fantastic. Thanks all!
Actually, being able to wait for an HTTP call to complete while doing other things in the meantime is something Karate cannot do by default. This question has me stumped, and it has never been asked before.
The only way I think this can be achieved in Karate is to create a thread and then run a feature from there. There is a Java API to call a feature, but if you are doing all that, you may be better off using some hand-crafted Java code to make the HTTP request. Then your case aligns well with the example mentioned here: https://twitter.com/getkarate/status/1417023536082812935
So to summarize, here is my opinion of the best plan of action:
For this "special" REST call, be prepared to write Java code that makes the HTTP call on a separate thread.
Call that Java code at the start of your Karate test (using Java interop).
You can pass the karate JS object instance so that you can call karate.listen() when the long-running job is done.
Actually, instead of the above, just use a CompletableFuture and a Java method call (just like the example linked above).
Now that step won't block, and you can do anything you want.
After you have done all the other work, use the listen keyword or call a Java method on the helper you used at the start of your test (just like the linked example).
That should be it! A rough sketch of such a helper follows below. If you make a good case for it, we can build some of this into Karate, and I will think it over as well.
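A minimal Java sketch of the kind of helper described above, using the JDK 11 HttpClient and a CompletableFuture; the class name, method name, and URL handling are assumptions, not part of the original answer:
// AsyncHttp.java (sketch) - starts the long-running request without blocking the test
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AsyncHttp {

    // fire the request on HttpClient's own executor and return immediately;
    // the future completes when the response body arrives
    public static CompletableFuture<String> start(String url) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
    }
}
In the feature you would then call this helper via Java interop (Java.type) right at the start, run the robot-action calls, and only block on the future's get() at the end, just before the final match assertions.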

Is it possible to call a test feature from a MockServer feature (matched) in Karate DSL?

I'm trying to do something that I don't know if it is even remotely possible or not.
I have a mock server, and I would like it to "start another test" - that is, call a test feature - when it receives a given request. I tried some things, including the one below, but it turns out that this mock server scenario does not respond.
Scenario: pathMatches('/ideas')
  * def xx = call read('SimpleStart.feature')
  * def response = $ideas.*
Is there an elegant way to make this work? Or a workaround or suggestion you can give me?
The use case is:
Perform some tests that make external services invoke the mock server, and whenever the mock server is requested, it triggers other tests.
Thanks in advance.
Yeah, Karate certainly isn't designed to do that. The pattern should be to set up your mocks and tests from a Java "runner" for maximum control, and that's what most teams do.
In short, "orchestrate" things from Java code.
That said, see if this gives you some other creative ideas: https://twitter.com/getkarate/status/1417023536082812935
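For reference, a rough sketch of that runner pattern using Karate's Java API; the class names and feature paths are hypothetical, and the exact API may vary slightly between Karate versions:
// MockAndTestsRunner.java (sketch) - start the mock, then run the test features from Java
import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import com.intuit.karate.core.MockServer;

public class MockAndTestsRunner {

    public static void main(String[] args) {
        // start the mock first so that the external services can reach it
        MockServer server = MockServer.feature("classpath:mocks/ideas-mock.feature")
                .http(8080)
                .build();
        try {
            // then run the test features in whatever order the scenario requires
            Results results = Runner.path("classpath:features/SimpleStart.feature").parallel(1);
            System.out.println("failed count: " + results.getFailCount());
        } finally {
            server.stop();
        }
    }
}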

ZAP automatically returning same response on breakpoint

I'm using OWASP ZAP as a proxy tool for testing mobile applications.
What I'm trying to do is set a breakpoint on some URL and return a custom response, to test the application's UI or functionality.
Currently, whenever the breakpoint is triggered, I have to manually let the request pass, then change the response and let that pass in order to see the change in the app. When I have to do this multiple times, it's not really convenient.
Is it possible to make a breakpoint on a URL that will return some predefined response every time it is triggered?
If it's not possible, are you aware of any other tool that is?
Yes you can do that, but not with breakpoints - they are manual only.
Instead you can use either:
Replacer
Scripts
The Replacer is easier to set up but more restricted, while scripts can do absolutely anything. There are example scripts for replacing strings in response headers and bodies.
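For example, a rough ZAP "HTTP Sender" script sketch in ECMAScript that rewrites the response for one URL every time it passes through the proxy; the URL and canned body are hypothetical, and method names may differ slightly between ZAP versions:
// HTTP Sender script - invoked for every message that ZAP proxies
function sendingRequest(msg, initiator, helper) {
    // nothing to change on the way out
}

function responseReceived(msg, initiator, helper) {
    var uri = msg.getRequestHeader().getURI().toString();
    if (uri.indexOf('/api/login') !== -1) {            // hypothetical URL to intercept
        msg.setResponseBody('{"result": "mocked"}');   // hypothetical canned response
        msg.getResponseHeader().setContentLength(msg.getResponseBody().length());
    }
}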

Optimizing Selenium tests by bypassing UI

Is there a way to bypass the UI for those actions which need to be performed before and/or after the test? Is it possible to send simple GET or POST requests to the same test session instead of writing the script in the test?
For example, I want to write a test which checks if the record can be deleted. To do that, first of all I need to create the record. It doesn't seem to be a good choice to do it through the UI since it is not part of the test itself.
It really depends on the application under test. You probably don't want to go making SQL calls to your database to create these records, unless you really know what you're doing. Even then, it will make your test automation break whenever the underlying data model changes.
Perhaps your application under test provides an API which will allow you to create a target record. That would be ideal, allowing you to make an API request then all you have to do in the UI is navigate to where the "user" would delete it.
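A minimal sketch of that pattern in Python; the records API, URLs, payload, and locators are invented for illustration and not taken from the question:
# create the record through the API, then exercise only the delete flow in the UI
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

# hypothetical endpoint and payload - adjust to the application under test
resp = requests.post("https://app.example.com/api/records",
                     json={"name": "record-to-delete"})
resp.raise_for_status()
record_id = resp.json()["id"]

driver = webdriver.Firefox()
try:
    # go straight to the record and test only the deletion itself
    driver.get("https://app.example.com/records/%s" % record_id)
    driver.find_element(By.CSS_SELECTOR, "button.delete").click()
finally:
    driver.quit()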
You can do pretty much anything by executing some JavaScript in the page.
Here is an example that sends an HTTP request with a JavaScript call:
from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.google.com")

# send a synchronous XMLHttpRequest from the page and return its body to Python
response_text = driver.execute_script("""
    var r = new XMLHttpRequest();
    r.open('POST', '/search', false);  // false = synchronous, so responseText is available
    r.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
    r.send('q=bill+material&output=xml&client=test&site=operations&access=p');
    return r.responseText;
""")
While it may be tempting to set up a test this way, I wouldn't recommend it, since it creates new dependencies on the UI, increases the complexity, and therefore increases the maintenance cost of the tests.

JMeter Tests and Non-Static GET/POST Parameters

What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and post variables are going to change on each run?
A quick, common example:
1. You go to a web page.
2. Enter some information into a form.
3. Click Save.
4. Behind the scenes, a new record is entered in the database.
5. You want to edit the record you just entered, so you go to another web page. Behind the scenes it passes the page a parameter with the database ID of the row you just created.
When you're running step 5 of the above test, the page parameter/database ID is going to change each time.
The workflow/strategy I'm currently using is:
Record a test using the above actions.
Make a note of each place where a query string variable may change from run to run.
Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable.
Replace all appropriate instances of the hard-coded parameter with the above variable.
This works and can be automated to an extent. However, it can get tedious, is error-prone, and is fragile. Is there a better/commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
Sounds to me like you're on the right track. The best that can be achieved with JMeter is to extract page variables with a regular expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution and becomes increasingly tricky to maintain or grow.
If you've reached this point then you may want to consider a tool which is more specialised for this sort of problem. Have a look at a web testing tool such as Watir; it will automatically handle changing post parameters, but you would still need to extract parameters if you need to do a database update. Using Watir allows for better code reuse, making the problem less painful.
We have had great success in testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP get/post and use a BSF Assertion and JavaScript to do complex validation of the response. Hope it helps.
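For what it's worth, a rough sketch of what such a scripted assertion can look like, written here for a BSF/JSR223 Assertion in JavaScript; the recordId variable is hypothetical and would come from one of the extractors mentioned above:
// scripted assertion - validate the response against a value extracted earlier
var body = SampleResult.getResponseDataAsString();  // response body of the sampler being asserted
var expectedId = vars.get("recordId");              // hypothetical variable saved by an extractor
if (body.indexOf(expectedId) < 0) {
    AssertionResult.setFailure(true);
    AssertionResult.setFailureMessage("response does not contain record " + expectedId);
}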