I'm currently working on making a few improvements to our Selenium-based UI tests. One feature I'm looking for is a reliable way for our website to detect which traffic is coming from our tests, so I can filter this traffic out of our browser usage metrics and logging.
One thought I had was to set a tracking cookie with Selenium that I could read server-side and append to my logs/metrics, making the test traffic easier to filter out. The challenge here is that cookies are domain-specific and, as far as I know, wouldn't be readable from other sites. Cookies are also a finite resource, and given the size and distributed nature of our website, it's quite possible to run into a situation where this could blow the size limit on cookies/headers and cause issues in the page.
Is this my best option, or is there another reliable way to detect from my web server whether my page is being automated with Selenium? (I'm not trying to combat bots; we have other systems in place to guard against DoS/DDoS attacks.)
When using Chrome, the Selenium driver injects a webdriver property into the browser's navigator object. This means that, for me, adding the following JS to my page redirected it to Stack Overflow:
if (navigator.webdriver == true) {
  window.location = "https://stackoverflow.com";
}
So just replace window.location = "https://stackoverflow.com"; with whatever you want: logging the requests somewhere, or flagging them so you can exclude them from whatever tool you use to measure traffic.
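For example, instead of redirecting, the page could ping a logging endpoint so the server can tag the session. This is only a sketch; /internal/automation-ping is a made-up endpoint you would have to implement yourself:
if (navigator.webdriver === true) {
  // Hypothetical endpoint that marks the current session as automated traffic.
  // navigator.sendBeacon posts a small payload without blocking navigation.
  navigator.sendBeacon("/internal/automation-ping", "selenium=true");
}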
So, the server obviously needs some token that tells it that a session is Selenium-based. Given that, here is what I would do.
Create a super simple API on your server. Have that API take the session token of the logged-in user, passed in with the request (almost always automatic). When the API receives that session token, mark something in the database (a new table, or the same table that stores session IDs, if any).
Have the API flag the session as a test session, and thus not valid for metrics.
This has no statistically significant impact on any server, so there is no worry about resources or performance.
It should take a very simple code-behind API and a very lightweight table that could simply have a single column holding a foreign key to the session ID involved. Every session ID inserted into this table is, by virtue of existing there, a test session.
And then, your metrics recording will need to add a single clause to a query that has effectively "WHERE (SELECT COUNT(sessionId) FROM TestSessionsTable WHERE sessionId = currentIdChecked) = 0"
And that would give you what you need. I am happy to be told of a better solution, but this strikes me as the simplest effort, with the least impact on resources.
As for detecting WebDriver sessions on the client side, you can either use C. Peck's suggestion, or directly call the API from your automation run using the WebDriver's JavaScript executor.
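For the second option, the script you push through the JavaScript executor can be as small as this (a sketch: /api/test-sessions is an invented endpoint, and the flagging API is assumed to pick up the session cookie automatically):
// Passed to the driver's executeScript/execute_script call at the start of the run.
// credentials: "include" sends the session cookie so the server knows which session to flag.
fetch("/api/test-sessions", { method: "POST", credentials: "include" })
  .then(function (response) { console.log("test session flagged:", response.status); });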
We've got a very large web application with about 1,000 pages to be tested (www.project-open.com, a project + finance management application for service companies). Each page may take multiple parameters (object ID, filters, column name to use for sorting, ...). We are now going to implement additional security checks on these parameters, so we need to systematically test that a) offensive parameter values are rejected and b) the parameter values actually used by the application are accepted correctly.
Example: We might want to say that the sort_column parameter in a page should only consist of alphanumeric characters. But in reality the application may include a column name with a space in it, leading to a false-positive security alert (the space character not being alphanumeric).
My idea for testing this would be to 1) manually navigate to each of these pages in proxy mode, 2) tell ZAP to start spidering all links on each page for one or two levels, and 3) tell ZAP to start fuzzing on those URLs.
How can this be implemented? I've got a basic understanding of ZAP and did some security testing of ]project-open[. I've read about a ZAP extension for scanning a list of URLs, but in our case we want to execute some specific ZAP actions on each of these URLs...
I'll summarise some of your options:
I'd start by using the ZAP desktop so that you can control it and see exactly what effect it has. You can launch a browser, explore your app, and then active-scan the URLs you've found. The standard spider will explore traditional apps very effectively, but apps that make heavy use of JavaScript will probably require the Ajax spider.
You can also use 'attack mode', which attacks everything that is in scope (which you define) that you proxy through ZAP. That just means ZAP effectively follows what you do and attacks anything new. If you don't explore part of your app, then ZAP won't attack it.
If you want to implement your own tests, then I'd have a look at creating scripted active scan rules. We can help you with those, but I'd just start with exploring your app and running the default rules for now.
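As a rough illustration, a scripted active scan rule written with ZAP's JavaScript ('Active Rules') script type looks something like the sketch below. The helper methods mirror ZAP's bundled script template, but the exact signatures can vary between ZAP versions, and the probe value and alert text here are just placeholders:
// Called once for every parameter of every message in scope.
function scan(as, msg, param, value) {
  // Work on a copy so the original request is left untouched.
  msg = msg.cloneRequest();
  // Inject a probe value into the parameter under test, e.g. a non-alphanumeric sort column.
  as.setParam(msg, param, "not a column");
  as.sendAndReceive(msg, false, false);
  // Inspect the response and raise an alert if the parameter check did not behave as expected.
  if (msg.getResponseHeader().getStatusCode() == 200) {
    as.raiseAlert(1, 1, "Parameter accepted an unexpected value", "Description goes here",
      msg.getRequestHeader().getURI().toString(), param, "not a column",
      "", "", "", 0, 0, msg);
  }
}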
I am developing an IdentityServer 4 .NET Core application, so this is as much a .NET question as an IDS4 question. One example of state I need to maintain between pages (login, signup, etc.) is the returnUrl. The application I'm migrating from used to store it in a session variable, but as I understand it, unless I run a persistent session strategy this won't scale well.
So currently, I'm passing it around as a field in each View Model used by each view so it can be returned. Is this a sound approach? I'll be needing other fields to be passed around as well so I'm wondering whether this is a secure and logical way to do it.
So currently, I'm passing it around as a field in each View Model used by each view so it can be returned. Is this a sound approach?
Yes; how you choose to pass it around is up to you, and I chose this same approach. You could use TempData, sessions, or even localStorage as an alternative. I think having it in the view models is a good approach because you are explicitly specifying where you want the return URL to exist; otherwise it might persist in a context where you wouldn't want it.
Now for the security question, because obviously the return URL can be seen in the browser address field.
As part of IdentityServer 4 setup you specify which return URLs you are allowed to redirect back to, so I don't think there is any harm in having users see the redirect URL.
Something to consider: what if the user shared the URL with someone else in the middle of the authentication process? Would that person be able to resume from the point where the initial user stopped, and is that something you want in your app?
If you mean reliably instead of securely, write tests which will provide you with confidence that your code works.
I have a few questions for you. Let me give some details before I ask them.
Recently I have been involved in REST API test automation.
We have a single REST API to fetch resources (this API is basically used in our web app UI for different business workflows to get data).
Though it's the same resource API, the actual endpoint varies for different situations,
i.e. the query parameters passed in the API URL differ based on the business workflow.
For example, something like:
../getListOfUsers?criteria=agedLessThanTen
../getListOfUsers?criteria=agedBetweenTenAndTwenty
../getListOfUsers?criteria=agedBetweenTwentyAndThirty
etc.
As there is only one API, and as the business workflow does not demand it, we don't have chained requests between APIs.
So the test just hits the individual endpoints and validates the response content.
The responses are validated against supplied test data. So the test data file has the list of users expected when hitting each particular endpoint,
i.e. the test file is static content which is used to check the response content every time we hit the endpoint.
If the actual response retrieved from the server deviates from our supplied test data, it is a failure.
(There are also tests for the no-content response, without auth, etc.)
This test is fine for confirming that the endpoints are working and whether the response content is good or not.
My actual questions are about the test strategy, or the business coverage, here:
Is a single hit on each API endpoint sufficient here or not?
Or should the same endpoint be hit again for other scenarios, especially when, as in the example above, the endpoints actually complement each other,
and could regression issues that might happen possibly be caught by any of them?
If API endpoints complement each other, will adding more tests just mean duplicated tests, more maintenance, and other problems later on, and should we avoid that
if it's not giving value?
What's the general trend in API automation regarding coverage? I believe it should be used to test the business integration flows and scenarios, if they demand it,
but for situations like this, is it really required?
Also, should we keep in mind here that automation is not meant to replace manual testing, but only to complement it,
and that attempting to automate every possible scenario is not going to give value and will only cause maintenance trouble later?
Thanks
Is a single hit on each API endpoint sufficient here or not?
Probably not; for each one you would want to verify various edge cases (e.g. lowest and highest values, longest string), negative tests (e.g. negative numbers where only positive ones are allowed), and other tests according to the business and implementation logic.
What's the general trend in API automation regarding coverage?
...
automation is not meant to replace manual testing, but only to complement it, and attempting to automate every possible scenario is not going to give value and will only cause maintenance trouble later?
If you build your tests in a modular way, then maintenance becomes less of an issue; you need to implement each API anyway, and the logic and test data above that should be the less complicated part of the test system.
Indeed, you usually want a test pyramid of many unit tests, some integration tests, and fewer end-to-end tests. But in this case, since there is no UI involved, the end user is just another software module, execution time for REST APIs is relatively short, and stability is relatively good, it is probably acceptable to have a wider end-to-end test layer.
I used a lot of conditionals above, since only you can evaluate the situation in light of the real system.
If possible, consider generating test data on the fly instead of using hard-coded values from a file; this will require parallel logic implemented in your tests, but it will make maintenance and coverage an easier job.
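For instance, instead of comparing against a static user list, a test can derive the expectation from the criterion itself. A sketch of that idea (the base URL, the exact boundary handling, and the age field in the response are assumptions about your API):
// Node.js 18+ (built-in fetch). Each criterion maps to a predicate the returned users must satisfy.
const criteria = {
  agedLessThanTen: function (user) { return user.age < 10; },
  agedBetweenTenAndTwenty: function (user) { return user.age >= 10 && user.age <= 20; },
  agedBetweenTwentyAndThirty: function (user) { return user.age >= 20 && user.age <= 30; },
};

async function checkCriterion(name, matches) {
  const response = await fetch("https://example.test/getListOfUsers?criteria=" + name);
  const users = await response.json();
  const offenders = users.filter(function (user) { return !matches(user); });
  if (offenders.length > 0) {
    throw new Error(name + " returned users outside the expected range: " + JSON.stringify(offenders));
  }
}

(async function () {
  for (const name of Object.keys(criteria)) {
    await checkCriterion(name, criteria[name]);
    console.log(name + ": OK");
  }
})();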
I am using rspec/capybara here.
I'd like to be able to log into the system only once, then run a bunch of scenarios. Should a scenario fail, it can effectively move on to the next one.
The problem is that once a scenario fails, a new browser session is started and I am asked to log in again. Is there a way around this?
How is this type of testing handled? Many systems require a user to log in first prior to exercising all its functions/features.
There are many ways to achieve this, but I myself prefer a new instance per spec at least, if not per context, or sometimes even per it block. I like atomic, self-contained tests.
Anyway, if you decide you want to do this, then you could:
Reuse a cookie session between tests but still open a new browser. Obviously this depends upon the system under test.
Create a global 'before all' hook which only creates a browser and signs in if you are not already signed in.
Create a global 'after all' hook which navigates to a known state (e.g. the home page) but doesn't log out.
There are many approaches which could work.
Why (other than moral reasons) don't more people use the CAPTCHAs of other sites as their own while selling the solving of said CAPTCHAs?
To me, such a system seems like it would be simple to implement:
set up a script that does something on another website that requires a CAPTCHA to be completed, through the use of a proxy service
when a user on your site performs a task that requires the completion of a CAPTCHA, simply serve them the CAPTCHA that the other site asks you to solve
when the user solves the CAPTCHA, your script can perform the desired action on the other site that is the source of the CAPTCHA, and the user on your site is also verified through this process
Is this commonplace? If not, why not? What, if anything, could be done to prevent this?
Fetching the CAPTCHA. Assume one could easily fetch the exact visual of the CAPTCHA from the foreign host. To do this, you have to pass the referrer check (most browsers, navigated by humans, do send the HTTP referer). You would also have to save the session_id and the secret from the hidden input.
Checking the result. The foreign host must link the saved variables with the ones associated with the session of your first request, which requires you to implement tricky cURL methods. You would have to handle multiple parallel requests, all from your single IP.
Your server will probably use more resources when hacking a CAPTCHA on a foreign host than if it generated a CAPTCHA of its own.
Prevention (a rough sketch of the first two items follows this list):
http_referer check
limit requests from a single IP to e.g. 5/minute
good session handling and tricky cookies
it's not impossible to reverse-engineer JavaScript, but the more complicated your JavaScript is, ...
you have to find a pattern that recognizes the result on the foreign host; the easiest signature may be the Location header field, leading either to /path/success.html or /path/tryagain.php
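Here is a minimal sketch of what the referer check and a naive per-IP rate limit could look like in front of a CAPTCHA endpoint. Node.js/Express and the example.org origin are assumptions, the in-memory map is not production-grade, and non-browser clients can of course spoof the Referer header:
const express = require("express");
const app = express();

const hits = new Map(); // IP address -> timestamps of recent CAPTCHA requests

app.get("/captcha", function (req, res) {
  // Referer check: only serve the CAPTCHA to pages of this site.
  const referer = req.get("Referer") || "";
  if (!referer.startsWith("https://example.org/")) {
    return res.status(403).send("forbidden");
  }
  // Naive rate limit: at most 5 CAPTCHA requests per IP per minute.
  const now = Date.now();
  const recent = (hits.get(req.ip) || []).filter(function (t) { return now - t < 60000; });
  if (recent.length >= 5) {
    return res.status(429).send("too many requests");
  }
  recent.push(now);
  hits.set(req.ip, recent);
  res.send("...captcha image/markup goes here...");
});

app.listen(3000);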
Challenge:
I took a moment to prepare an example: http://woisteinebank.de/test/
In this example, I attach keys to the session_id() and save them in the database.
Through session_regenerate_id() I have a fresh session on every request.
In check.php, I compare the database values to the $_GET values.
Try to find a way to leech this CAPTCHA, and I'll try to defend it. Every time you successfully use my CAPTCHA on your site, I'll try to defend against it.