Can ZAP be used as a DAST tool via its API without spidering?

I'm trying to use ZAP as a DAST tool via the API and it's getting a bit annoying.
Can I use the tool as an attack tool instead of a proxy tool? What I mean is, currently I can't launch an active scan without the URL being in the site tree, which is only populated via the spider, as far as I know, right?
What I want is to provide a URL, launch an active scan based on a policy, and get results. Now that I think about it, this is similar to fuzzing, just with attack vectors. I do see the question of what to do with URL X if there is no history or prior scanning, but can't ZAP just scan the page for actions and variables? The main difference is scanning a single page/URL, as opposed to spidering, which assumes there are other URLs.
After writing this I'm not sure it can be done without a spider, unless you're in my situation, so let me explain it.
Let's say, for example's sake, I just want to scan the login page for SQL injection, and I'm using OWASP Juice Shop to make things easier. Can I tell ZAP to attack that one page? The only way I found in that example is via the POST request, since the URL is not a static page and isn't picked up by ZAP unless it's an action, but then I can't launch the scan without spidering, so this is like a loop.
Sorry for the long post hopefully you can provide some insights.

ZAP has to know about the site it's going to attack. We deliberately separate the concepts of discovery and attacking because there's no one discovery option that's best for everyone. You can use the standard spider, the AJAX spider, import URLs, import definitions such as OpenAPI, proxy your browser, proxy regression tests, or even make direct requests to the target site via the ZAP API.
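To make that last option concrete, here is a minimal sketch (not an official ZAP recipe) that hits ZAP's JSON API directly to attack a single URL without spidering. It assumes a local ZAP daemon on port 8080, an API key of "changeme", the Juice Shop login endpoint, and a "SQL-Injection-Only" scan policy, all of which are placeholders you would replace with your own values:

```typescript
// Sketch: attack one URL without spidering, assuming a local ZAP daemon on port 8080
// and an API key of "changeme" (both placeholders for your own setup).
const ZAP = "http://localhost:8080";
const APIKEY = "changeme";

async function zap(path: string, params: Record<string, string>): Promise<any> {
  const qs = new URLSearchParams({ ...params, apikey: APIKEY });
  const res = await fetch(`${ZAP}${path}?${qs}`);
  if (!res.ok) throw new Error(`ZAP API error: ${res.status}`);
  return res.json();
}

async function scanSingleUrl(target: string, policy: string): Promise<void> {
  // 1. Make ZAP request the URL directly so it appears in the Sites tree.
  //    (For a POST-only endpoint you could instead send the full request through
  //    core/action/sendRequest, or proxy a single request through ZAP.)
  await zap("/JSON/core/action/accessUrl/", { url: target, followRedirects: "true" });

  // 2. Start an active scan against just that URL (recurse=false), using a named policy.
  const { scan } = await zap("/JSON/ascan/action/scan/", {
    url: target,
    recurse: "false",
    scanPolicyName: policy, // placeholder: a policy you created with only SQL Injection rules enabled
  });

  // 3. Poll until the scan reports 100% complete.
  let status = "0";
  while (Number(status) < 100) {
    await new Promise((r) => setTimeout(r, 2000));
    ({ status } = await zap("/JSON/ascan/view/status/", { scanId: scan }));
  }

  // 4. Fetch the alerts raised for the target.
  const { alerts } = await zap("/JSON/core/view/alerts/", { baseurl: target });
  console.log(`Found ${alerts.length} alert(s) for ${target}`);
}

// Placeholder target: the Juice Shop login endpoint on a local instance.
scanSingleUrl("http://localhost:3000/rest/user/login", "SQL-Injection-Only").catch(console.error);
```

The scan policy has to exist in ZAP beforehand; the point of the sketch is simply that discovery can be a single direct request via the API rather than a spider run.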
It looks like you have quite a few questions about ZAP. The ZAP User Group is probably a better forum for them: https://groups.google.com/group/zaproxy-users

Related

Utilizing ZAP for REST API testing

I'm curious as to how ZAP can be used to test REST APIs in the context of API security. Is it just the OpenAPI add-on that can be used, or are there other (more effective) methods?
There's a ZAP FAQ for that :) https://www.zaproxy.org/faq/how-can-you-use-zap-to-scan-apis/ :
ZAP understands API formats like JSON and XML and so can be used to scan APIs.
The problem is usually how to effectively explore the APIs.
There are various options:
If your API has an OpenAPI/Swagger definition then you can import it using the OpenAPI add-on.
If you have a list of endpoint URLs then you can import these using the Import files containing URLs add-on.
If you have regression tests for your API then you can proxy these through ZAP.
The add-ons are available from the ZAP Marketplace.
Once ZAP knows about the URL endpoints it can scan them in the same way as it scans HTML based web sites.
If you don't have any of these things then post to the ZAP User Group explaining what you are trying to do and the problems you are having.
For more details see the blog post Scanning APIs with ZAP.
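As a rough sketch of the OpenAPI route driven entirely through the API (assuming the OpenAPI add-on is installed, a local ZAP daemon on port 8080, an API key of "changeme", and placeholder URLs; parameter names can vary between add-on versions):

```typescript
// Sketch: import an OpenAPI definition, then active-scan the imported endpoints.
// ZAP address, API key, definition URL and target host below are all placeholders.
const ZAP = "http://localhost:8080";
const APIKEY = "changeme";

async function zap(path: string, params: Record<string, string>): Promise<any> {
  const qs = new URLSearchParams({ ...params, apikey: APIKEY });
  const res = await fetch(`${ZAP}${path}?${qs}`);
  if (!res.ok) throw new Error(`ZAP API error: ${res.status}`);
  return res.json();
}

async function scanApi(definitionUrl: string, apiBase: string): Promise<void> {
  // Import the OpenAPI/Swagger definition; ZAP populates the Sites tree with each endpoint.
  await zap("/JSON/openapi/action/importUrl/", { url: definitionUrl, hostOverride: apiBase });

  // Active-scan everything discovered under the API's base URL.
  const { scan } = await zap("/JSON/ascan/action/scan/", { url: apiBase, recurse: "true" });
  console.log(`Active scan started with id ${scan}`);
}

scanApi("https://example.com/openapi.json", "https://example.com").catch(console.error);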
Using the Fuzzer from OWASP ZAP is also a good idea.
Fuzzing allows you to trigger unexpected behaviour from the API server by submitting malformed requests, malformed parameters, and guessing unpublished API methods.
You can read about what fuzzing is here: https://owasp.org/www-community/Fuzzing
It will allow you to fuzz the URL string or a single parameter.
To start the fuzzer you will need to:
Right-click on the request -> Attack -> Fuzz...
Highlight the parameter you want to use and click the "Add" button.
Click Add in the new Payloads window, choose the appropriate option, and click Add Payload.
I would recommend choosing the "File Fuzzers" option at step 3 and picking one of the pre-defined wordlists, or importing your own. You can use SecLists to find a bunch of fuzzing wordlists. Here is a set of wordlists designed for API fuzzing: https://github.com/danielmiessler/SecLists/tree/master/Discovery/Web-Content/api
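To illustrate what the fuzzer is doing under the hood, here is a rough, hypothetical sketch of the same idea outside ZAP: replay one request per wordlist entry against a single parameter and flag responses that differ from a baseline. The endpoint, parameter name, and wordlist file are placeholders:

```typescript
import { readFileSync } from "node:fs";

// Rough illustration of single-parameter fuzzing: substitute each wordlist entry into
// one parameter and flag responses that deviate from the baseline. The endpoint,
// parameter name, and wordlist path are placeholders.
const endpoint = "https://example.com/api/items";
const wordlist = readFileSync("api-wordlist.txt", "utf8").split("\n").filter(Boolean);

async function fuzzParameter(param: string): Promise<void> {
  // Establish a baseline response for a "normal" value.
  const baseline = await fetch(`${endpoint}?${param}=expected-value`);

  for (const payload of wordlist) {
    const res = await fetch(`${endpoint}?${param}=${encodeURIComponent(payload)}`);
    // Crude heuristic: server errors or any status differing from the baseline
    // are worth a manual look (a real fuzzer also compares body size, timing, etc.).
    if (res.status >= 500 || res.status !== baseline.status) {
      console.log(`[!] ${payload} -> HTTP ${res.status}`);
    }
  }
}

fuzzParameter("id").catch(console.error);
```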
Furthermore, OWASP ZAP allows you to perform manual API testing if you know the methodology.
Here you can find some links related to REST security:
https://cheatsheetseries.owasp.org/cheatsheets/REST_Security_Cheat_Sheet.html
https://cheatsheetseries.owasp.org/cheatsheets/REST_Assessment_Cheat_Sheet.html

How to call the Google NLP API from a Google Chrome extension

My aim is to select some text on a web page, start a Google Chrome extension, and pass the text to a Google Cloud API (the Natural Language API in my case).
I want to do some sentiment analysis and then get back the result to mark/highlight positive sentences in green and negative ones in red.
I am new to this and do not know how to start.
The extension consists of a manifest, popup, etc. How should I call an API from there that does natural language processing?
Should I create a Google Cloud application with an API key to call? In that case I would have to upload my credentials, right?
Sorry, it sounds a bit confusing, I know, but I just don't know how I can bring these two things together, and I would be more than happy about any help.
The best way to authenticate your app will depend on the specific needs and use cases of your application. You can see an overview of all the different methods here.
If you are not planning on identifying users nor on using a back end server that handles authenticating (as I assume to be your case), the best option would indeed be to use API keys. They do not identify the user, but are enough for the Natural Language APIs.
To do this you will need to create an API key for the services you want and add the necessary restrictions to make the key as secure as possible. Detailed instructions on how to do this and how to use the key in a url can be found here.
The API call could be made from within the Chrome extension with any JavaScript method capable of performing POST requests, for example XMLHttpRequest or the Fetch API. You can find an example of the parameters that need to be included in the request here.
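As a rough sketch of that call (the API key is a placeholder, and the request shape follows the documents:analyzeSentiment method of the Natural Language API v1):

```typescript
// Minimal sketch of calling the Natural Language API from extension code with fetch.
// GCP_API_KEY is a placeholder for your restricted API key.
const GCP_API_KEY = "YOUR_API_KEY";

interface SentenceSentiment {
  text: { content: string };
  sentiment: { score: number };
}

async function analyzeSentiment(text: string): Promise<SentenceSentiment[]> {
  const url = `https://language.googleapis.com/v1/documents:analyzeSentiment?key=${GCP_API_KEY}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      document: { type: "PLAIN_TEXT", content: text },
      encodingType: "UTF8",
    }),
  });
  if (!res.ok) throw new Error(`Natural Language API error: ${res.status}`);
  const data = await res.json();
  return data.sentences as SentenceSentiment[];
}

// Example: score each sentence, then decide whether to highlight it green or red.
analyzeSentiment("I love this. This part is terrible.").then((sentences) => {
  for (const s of sentences) {
    const colour = s.sentiment.score >= 0 ? "green" : "red";
    console.log(`${colour}: ${s.text.content} (score ${s.sentiment.score})`);
  }
});
```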
You may run into CORS issues when making the request directly from the extension. I recommend reading this answer, where a couple of workarounds for these issues are suggested.

Rightmove API and scraping: technical and legal

I'm looking to build an app using property data. Nestoria has a free API with rules of use, and Zoopla has an API you register for. OnTheMarket and Rightmove have the same terms of use to the letter (bizarre for competitors?). Rightmove advertises an API for upload but not download; I can't find anything for OnTheMarket.
I've discovered that Rightmove does have an API, although the postcode search is obfuscated by their own outcode mappings...
https://api.rightmove.co.uk/api/sale/find?index=0&sortType=1&numberOfPropertiesRequested=2&locationIdentifier=OUTCODE%5E1&apiApplication=IPAD
I'm wary of using an API that's not promoted. The alternative is scraping, which is harder technically and legally questionable, although from what I read the data is in the public domain and so free to use.
I've contacted Rightmove but got no response.
Is anyone using the Rightmove API and has had this authorised by them? It seems most strange that it's open and available but barely mentioned when searching for it.
Can anyone clarify what rules/law/ethics are in place for scraping data?
Don't query their hidden API. But you can run a web crawler on the RightMove.co.uk website, and it is perfectly legal, as outlined in their Terms of Service under section 3.3:
You must not use or attempt to use any automated program unless the automated program identifies itself uniquely in the User Agent field and is fully compliant with the Robots Exclusion Protocol
A web crawler like Apache Nutch follows the Robots Exclusion Protocol exactly. From their robots.txt file I found they have elaborate nested sitemap.xml files, and hence they rather encourage organised but polite crawling of their website. I wanted to get their data myself, so I am beginning my endeavour to crawl them with my own resources; do let me know if you need access to this data.
You are not allowed to scrape their data; here is what their terms & conditions say about it:
"You must not use or attempt to use any automated program (including, without limitation, any spider or other web crawler) to access our system or this Site. You must not use any scraping technology on the Site. Any such use or attempted use of an automated program shall be a misuse of our system and this Site. Obtaining access to any part of our system or this Site by means of any such automated programs is strictly unauthorised."

goo.gl shortening API: shorten via GET request

Is it possible to shorten a URL using the goo.gl shortening API with a GET request? Their only instructions are for POST, and it doesn't make much sense that they wouldn't have a way to do this via GET.
It's actually unlikely that they support GET to do that. Good practice requires that GET requests not cause side effects (permanent data changes) in web applications. This prevents problems related to web spiders causing havoc simply by trying to crawl a site (imagine a "delete" button that worked with a GET, causing a spider to inadvertently remove content).
Additionally, GET requests are a lot easier to force a third party to make (e.g. by embedding the URL in an image tag on a forum), which is often a security problem. In the case of goo.gl, it would allow trivial and hard-to-block DoS-type attacks on the service.
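For reference, the documented way was a POST with a JSON body, roughly like the sketch below (the API key is a placeholder, and note that Google has since retired the goo.gl URL Shortener API, so this is purely illustrative):

```typescript
// Purely illustrative: the documented goo.gl call was a POST with a JSON body.
// Google has since retired the URL Shortener API, so this request no longer works.
async function shorten(longUrl: string, apiKey: string): Promise<string> {
  const res = await fetch(`https://www.googleapis.com/urlshortener/v1/url?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ longUrl }),
  });
  if (!res.ok) throw new Error(`URL Shortener API error: ${res.status}`);
  const data = await res.json();
  return data.id; // the shortened URL
}
```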

How would you go about making an application that automatically retrieves your bank account balance twice a day?

I'm building a utility that will hopefully keep my wife in tune with how much money we have available.
I need a simple secure way of logging into my bank account and retrieving the balance.
Something like mechanize is the only method I can think of. I'm not even sure if that would work, given the properly authenticated HTTPS that banks use.
Any ideas?
Write a Perl script using LWP::UserAgent. It supports HTTPS connections. The only issue might be if the site requires JavaScript.
Web Client Programming with Perl has a few examples to get you started if you're not too familiar with Perl.
If you really want to go there, get these extensions for Firefox: Live HTTP Headers, Firebug, FireCookie, and HttpFox. Also download cURL and a scripting language that can run cURL command-line tasks (or a scripting language like PHP or Perl that has access to cURL libraries directly).
I've started down this road for some idempotent GET tasks like getting PDFs of the S&P reports (of the stocks I track) from my online brokerage, and downloading the check images for my bank account. Both are repetitive, slow ways of downloading data to my computer, and the financial institutions don't provide any easier way of getting it.
Here's why you shouldn't: (as a shortcut I'm going to call the archetypal large bank, brokerage, or other financial institution "BloatBank")
BloatBank is not likely to make public their API for accessing this kind of information. So it can change any time and all your hard work will be for naught. Whenever they change their mechanism, you'll have to adapt.
If BloatBank finds out you've been using automatic scripting to try to access your account information, they may ban you because you've violated their terms of service.
You might screw up, and the interaction between the hodgepodge of scripts on BloatBank's server, and your scripts that access your account, might cause a Bad Thing like closing your account. Testing this kind of script is tremendously difficult because you don't have any documentation about how their online service works, and you don't have a test account you can mess with.
(a variant of the above) You think you're safe because you're issuing GET requests. But BloatBank is just a crazy bank that doesn't know anything about REST, so there are some GET requests that can mess up your account.
If someone else does use your script to maliciously sniff your online password or mess with your account, any liability coverage from BloatBank may disappear because you've opened a security hole.
Why don't you teach your wife how to login to the bank herself? Or use Quicken (or Mint, etc) and teach her how to use the auto-download feature?
Have you checked out Watir? It is fantastic for automating web-browser actions. And since it's written in Ruby, you can take the results and store them in a DB (or email them to yourself) if needed.
If you are open to AIR, I'd say build an AIR app. I have worked with mechanize and I think it's cool. AIR gives you similar features with a richer GUI (see HTMLLoader and DOM manipulation of the web page).
If I were you, I'd simply pull the page and manipulate the DOM to suit my visual needs.
Please, if you find this easy to do for your bank please post your bank's name. If I have the same one I'll be closing my account.
More to your question: the process of loading a web page inside your code rather than in a browser can be a black art, especially if there is any JavaScript involved. Your best bet would probably be embedding the IE WebBrowser control in your app and then simulating keystrokes and mouse clicks to arrive at your balance page. Then scrape the HTML for the balance.
I could try paying for Quicken and letting it do the balance downloading. Then I'd just need to find a way to get the number out of the software automatically.
This way I'm not violating any terms of service and I'm also reducing security risk since all "hacking" goes on locally.