How to automate fuzzing in ZAP?

We have a requirement to automate the following workflow in ZAP:
Go through a POST request in the ZAP tool
Identify the values that were posted in the Request tab
Highlight a passed value (for example, one sent to a textarea field) and right-click > Fuzz
Choose the required injections, such as SQL Injection or RDF Injection
Add payloads
Start the fuzzer
The expected result would be a comparison report of the request before and after fuzzing.
Can this be automated in ZAP?

Currently the Fuzzer doesn't have a web API, largely because we're lacking user input on how such functionality should work and what users' expectations for it might be.
Here's the existing issue you should provide your feedback on: https://github.com/zaproxy/zaproxy/issues/1689
There is an unfinished PR adding an initial implementation; you could pull the PR branch and build the add-on for testing purposes, and potentially encourage the submitting user to complete it: https://github.com/zaproxy/zap-extensions/pull/2222
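Since the Fuzzer has no API yet, one interim workaround is to approximate simple fuzzing through the core API: replay the recorded request with different payloads and diff the responses yourself. Below is a minimal sketch using the official Python client (zapv2); the target URL, parameter, payload list and API key are hypothetical placeholders, not part of the question.

```python
# pip install python-owasp-zap-v2.4
from zapv2 import ZAPv2

zap = ZAPv2(apikey='changeme')  # placeholder key; ZAP must already be running

# Baseline request as captured in the Request tab; URL and parameter are made up.
template = (
    "GET http://target.example/search?q={payload} HTTP/1.1\r\n"
    "Host: target.example\r\n\r\n"
)

payloads = ["' OR '1'='1", "<script>alert(1)</script>", "A" * 5000]

baseline = zap.core.send_request(template.format(payload="benign"))
for payload in payloads:
    result = zap.core.send_request(template.format(payload=payload))
    # result holds the sent message(s); comparing status line and body length
    # against the baseline gives a crude before/after report.
    print(payload, result)
```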

Related

Restrict access to a partially implemented API in production

We need to develop an API that takes a CSV file as input and persists its rows in a database. Using vertical slicing, we have split the requirement into two stories:
The first story is a partial implementation with no data validation.
The second story completes the use case by adding all validations.
Sprint 1 has the first story and sprint 2 the second. After implementing the first story in sprint 1 we want to release it to production. However, we don't want to make the API accessible to the public, which would be a big security risk, as invalid data could be inserted into the database (story 1 ignores validation).
What is the best strategy to release story 1 at the end of sprint 1 while addressing these security concerns?
We tried disabling access via a toggle flag (such as ConfigCat). However, we don't want to implement something that is not required for the actual implementation.
Is there really such a risk that, within one sprint, someone may start using the API? And if you haven't added it to any documentation, how would they know of its existence?
But let's say it is possible: what about using a feature toggle? When the toggle is off, the endpoint returns null or even an HTTP error code. Then you can enable the feature toggle when you're ready for people to start using the endpoint.
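To make that concrete, here is a minimal sketch of the toggle idea in Python/Flask; the framework, endpoint name and environment variable are my own assumptions, not from the question. A managed service such as ConfigCat could back the check instead of an environment variable.

```python
import os

from flask import Flask, abort, jsonify

app = Flask(__name__)

def import_enabled() -> bool:
    # Simplest possible toggle: an environment variable flipped at deploy time.
    return os.environ.get("CSV_IMPORT_ENABLED", "false").lower() == "true"

@app.route("/import", methods=["POST"])
def import_csv():
    if not import_enabled():
        abort(404)  # behave as if the endpoint does not exist yet
    # Story 1: persist the rows without validation (validation lands in story 2).
    return jsonify({"status": "accepted"}), 202
```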

How to find a username with the GitHub API

I've made an application that creates pull requests to update the dependencies in all of my org's repos when the repo "Alpha" gets a new tag. The process is triggered by our CI flow on Alpha. Other engineers here would like to upgrade this application so that whoever made the tag is also automatically added as a requested reviewer on all of the associated pull requests. I do not see any way to do this with the GitHub REST API. So far I have:
GET tag by name -> tag object SHA
GET tag (with object SHA) -> tagger name & tagger email
*************GAP**************
POST requested reviewer (with username) -> completed!
I can't see any good way to get a username from the REST API with the name and/or email. I could query commits from Alpha and filter them, BUT "person who tagged" != "person who made the last commit", and I know that at least one of our more prolific taggers sometimes logs in from different emails (web vs. CLI vs. home machine, etc.), so the app might miss them from time to time.
I think it may be possible to get what I want via the GraphQL API, but I'd really like to exhaust the REST possibilities before I go down that road. Please shoot any ideas my way!
After gathering more information, it looks like it's possible, and even slightly more elegant than I anticipated. If I have the name of the tag (the 'ref'), I can get a specific commit with that rather than the SHA. The response for this commit includes author information that gives the login. I can then use this along with the pull number to request a reviewer.
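For reference, a minimal sketch of that flow against the REST API using Python's requests library; OWNER, REPO, TAG and PULL_NUMBER are placeholders, and the token header is assumed.

```python
import requests

OWNER, REPO, TAG, PULL_NUMBER = "my-org", "Alpha", "v1.2.3", 42  # placeholders
HEADERS = {
    "Authorization": "token <PAT>",  # personal access token
    "Accept": "application/vnd.github+json",
}

# A tag name is a valid ref for the commits endpoint, so no SHA lookup is needed.
commit = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{TAG}",
    headers=HEADERS,
).json()
# "author" is the matched GitHub account; it can be null if GitHub cannot map
# the commit email to a user, so guard accordingly.
login = commit["author"]["login"]

# Request that user as a reviewer on the dependency-update pull request.
requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PULL_NUMBER}/requested_reviewers",
    headers=HEADERS,
    json={"reviewers": [login]},
)
```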

ZAP: Mix manual browsing, active scanning and fuzzing for testing a very large Web application?

We've got a very large Web application with about 1000 pages to be tested (www.project-open.com, a project + finance management application for service companies). Each page may take multiple parameters (object-id, filters, column name to use for sorting, ...). We are now going to implement additional security checks on these parameters, so we need to systematically test that a) offensive parameter values are rejected and b) that the parameter values actually used by the application are accepted correctly.
Example: We might want to say that the sort_column parameter in a page should only consist of alphanumeric characters. But the application in reality may include a column name with a space in it, leading to a false positive security alert (space character not being an alphanumeric character).
My idea for testing this would be to 1) manually navigate to each of these pages in proxy mode, 2) tell ZAP to start spidering all links on this page for one or two levels and 3) tell ZAP to start fuzzing on these URLs.
How can this be implemented? I've got a basic understanding of ZAP and did some security testing of ]project-open[. I've read about a ZAP extension for scanning a list of URLs, but in our case we want to execute some specific ZAP actions on each of these URLs...
I'll summarise some of your options:
I'd start by using the ZAP desktop so that you can control it and see exactly what effect it has. You can launch a browser, explore your app and then active-scan the URLs you've found. The standard spider will explore traditional apps very effectively, but apps that make heavy use of JavaScript will probably require the Ajax Spider.
You can also use 'attack mode', which attacks everything that is in scope (which you define) that you proxy through ZAP. That means ZAP effectively just follows what you do and attacks anything new. If you don't explore part of your app then ZAP won't attack it.
If you want to implement your own tests then I'd have a look at creating scripted active scan rules. We can help you with those, but I'd just start with exploring your app and running the default rules for now.
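Once you've settled on a flow in the desktop, the same explore-then-scan steps can be driven headlessly through the ZAP API. A minimal sketch with the Python client (zapv2); the API key is a placeholder, and ZAP is assumed to be already running.

```python
import time

from zapv2 import ZAPv2

zap = ZAPv2(apikey='changeme')  # placeholder key
target = 'https://www.project-open.com'  # one of the pages explored manually

# Spider outward from the page; crawl depth is configurable in ZAP's options.
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Actively scan everything the spider found under the target.
scan_id = zap.ascan.scan(target)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

print(zap.core.alerts(baseurl=target))
```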

API Testing - Best Strategy to adopt

I have a few questions for you. Let me give some details before I ask them.
Recently I have been involved in REST API test automation.
We have a single REST API to fetch resources (this API is basically used in our web app UI by different business workflows to get data).
Though it's the same resource API, the actual endpoint varies for different situations, i.e. the query parameters passed in the API URL differ based on the business workflow. For example:
../getListOfUsers?criteria=agedLessThanTen
../getListOfUsers?criteria=agedBetweenTenAndTwenty
../getListOfUsers?criteria=agedBetweenTwentyAndThirty
As there is only one API, and as the business workflows do not demand it, we don't chain requests between APIs. So the test just hits the individual endpoints and validates the response content.
The responses are validated against supplied test data: the test data file holds the list of users we expect when hitting each particular endpoint. In other words, the test file is static content used to check the response every time we hit the endpoint; if the actual response retrieved from the server deviates from our supplied test data, it is a failure. (There are also tests for no-content responses, requests without auth, etc.)
This test is fine for confirming that the endpoints are working and the response content is good.
My actual questions are about the test strategy, or business coverage, here:
Is a single hit on each API endpoint sufficient, or should the same endpoint be hit again for other scenarios, especially when endpoints like the examples above complement each other? Could regression issues be caught either way?
If API endpoints complement each other, will adding more tests just mean duplicated tests, more maintenance and other problems later on? Should we avoid that if it isn't giving value?
What's the general trend in API automation regarding coverage? I believe it should be used to test business integration flows, and scenarios where they demand it, but for situations like this, is that really required?
Also, should we keep in mind here that automation is not meant to replace manual testing, but only to complement it, and that attempting to automate every possible scenario will not add value and will only cause maintenance trouble later?
Thanks
Is a single hit on each API endpoint sufficient?
Probably not. For each one you would want to verify various edge cases (e.g. lowest and highest values, longest string), negative tests (e.g. negative numbers where only positive values are allowed) and other tests according to the business and implementation logic, as in the sketch below.
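For instance (the endpoint and criteria names are taken from the question; the host, the age bounds and the shape of the JSON response are illustrative assumptions), parametrised pytest cases make it cheap to cover each criterion plus its boundaries:

```python
import pytest
import requests

BASE = "https://api.example.com"  # placeholder host

@pytest.mark.parametrize("criteria,age_range", [
    ("agedLessThanTen", (0, 9)),
    ("agedBetweenTenAndTwenty", (10, 20)),
    ("agedBetweenTwentyAndThirty", (20, 30)),
])
def test_users_fall_inside_bucket(criteria, age_range):
    resp = requests.get(f"{BASE}/getListOfUsers", params={"criteria": criteria})
    assert resp.status_code == 200
    low, high = age_range
    for user in resp.json():
        # Whether age 20 belongs to the ten-to-twenty or twenty-to-thirty
        # bucket is exactly the kind of boundary to pin down explicitly.
        assert low <= user["age"] <= high
```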
What's the general trend in API automation regarding coverage?
...
automation is not meant to replace manual testing, but only to complement it ... attempting to automate every possible scenario will not add value and will only cause maintenance trouble later?
If you build your tests in a modular way then maintenance becomes less of an issue; you need to implement each API anyway, and the logic and test data above that should be the less complicated part of the test system.
Indeed, you usually want a test pyramid of many unit tests, some integration tests and fewer end-to-end tests. But in this case, since there is no UI involved, the end user is just another software module, and execution time for REST APIs is relatively short and their stability relatively good, it is probably acceptable to have a wider end-to-end test layer.
I used a lot of conditionals above, since only you can evaluate the situation in light of the real system.
If possible, consider generating test data on the fly instead of using hard-coded values from a file. This requires parallel logic implemented in your tests, but it will make maintenance and coverage an easier job; see the sketch below.
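A sketch of that idea, assuming some fixture exists to seed the generated users into the system under test (seed_database and call_api below are hypothetical helpers): generate the users randomly, then derive the expected response from the same data with parallel filter logic instead of a static file.

```python
import random

def make_users(n=50):
    # Generated on the fly instead of read from a static test data file.
    return [{"name": f"user{i}", "age": random.randint(0, 40)} for i in range(n)]

# Parallel implementation of the business rules under test.
CRITERIA = {
    "agedLessThanTen": lambda u: u["age"] < 10,
    "agedBetweenTenAndTwenty": lambda u: 10 <= u["age"] <= 20,
}

def expected(users, criteria):
    return sorted(u["name"] for u in users if CRITERIA[criteria](u))

users = make_users()
# seed_database(users)                  # hypothetical fixture
# actual = call_api("agedLessThanTen")  # hypothetical API wrapper
# assert sorted(u["name"] for u in actual) == expected(users, "agedLessThanTen")
```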

If the client keeps changing the requirements every now and then, what testing method should be followed?

I always perform regression testing as soon as changes come up. The problem is that the client comes up with changes or additional requirements every now and then, and that makes things messy: I test something, and then the whole thing gets changed. Then I have to test the changed module again and perform integration testing with the other modules linked to it.
How should I deal with such cases?
1) First, ask for the complete client requirements and note every small point in a document.
2) Understand the total functionality.
3) Use your default testing method.
4) You haven't mentioned what type of product you are testing (app or portal).
5) Continue with whichever testing approach you are comfortable with and find easy.
6) If you want automated testing, use Appium for mobile apps or Selenium for the web.
I hope this is helpful for you.
I would suggest the following things:
-> Initially, gather all the requirements and check with the client by email if you have any queries.
-> Document everything in minutes of meeting (MoM) whenever you have a client call, and share them with everyone who attended the call (dev team, client, business, QA).
-> Prepare a test plan/strategy document and test cases, share them with the client, and request sign-off.
-> Once you are all set, start with smoke testing, then check the major functionalities in that release, and then proceed further.
-> You could automate the regression test cases, as you are going to execute them for every release (I would suggest Selenium, or UFT if it's a desktop application).
Kindly let me know if you have any queries.