OWASP ZAP: share a Context between environments and change base URI addresses

I'm new to the ZAP tool, so sorry in advance if this is a stupid question, but I haven't been able to find an answer so far...
I have to fix all the vulnerabilities in an application, so I installed the ZAP proxy tool locally, explored the application manually, collected all the requests, and ran the Active Scanner against them. So far so good, but the problem is that the application is quite big, and it's very difficult and time-consuming to cover everything manually. Fortunately, we have a dedicated automation environment where I can set up the ZAP proxy, let the tests run, and have them populate the context (the set of URLs to test) for me.
So now my task is to somehow share contexts between different environments, with the ability to change the base addresses.
E.g. I populated a context on somedomain/myapp and want to run ZAP against the same application deployed locally or on a different server (e.g. localhost/myapp).
It would be very helpful if someone could share any info on how to achieve that.
Thank you in advance,
Eugene

It seems that you can create a new context and then add existing links to that context:
Create a new context
Add the existing links to the selected context (right click)
Check this link:
https://chrisdecairos.ca/intercepting-traffic-with-zaproxy/
Tiago
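If you want to script this instead of doing it in the GUI, the ZAP API exposes the same context operations. Below is a minimal sketch using the Python client (python-owasp-zap-v2.4); the context name, file path, and regexes are placeholders, and note that a context only carries the scope regexes (plus users, technology, etc.), not the crawled URL tree, so the new host still has to be spidered or driven by your automated tests through the proxy:

```python
# Sketch: export a ZAP context from one environment, import it on another,
# and re-point its scope at a new base address before scanning.
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

zap = ZAPv2(apikey="changeme",  # whatever API key your ZAP instance uses
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

CONTEXT = "myapp"

# On the automation environment: save the context the tests populated.
zap.context.export_context(CONTEXT, "/tmp/myapp-context.xml")

# On the other environment: import it and swap the base address.
zap.context.import_context("/tmp/myapp-context.xml")
zap.context.include_in_context(CONTEXT, r"https?://localhost/myapp.*")
zap.context.exclude_from_context(CONTEXT, r"https?://somedomain/myapp.*")

# The context holds scope, not URLs, so crawl the relocated app first...
scan_id = zap.spider.scan("http://localhost/myapp", contextname=CONTEXT)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# ...then run the active scan within that context.
context_id = zap.context.context(CONTEXT).get("id")  # key name may differ by ZAP version
zap.ascan.scan("http://localhost/myapp", contextid=context_id)
```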

Related

Is there a way to control a local tool (CATIA/Office) from a web browser?

Currently I am working on CATIA development with C++ and the Automation Interface. Everything is based on the local environment of each client machine. After we update our program, clients have to deploy it manually once they receive the update.
We are considering whether there is a way to put our program on a server and give specific clients the authorization to access it. They would still need CATIA installed on their local machines, but our customization programs would live on the web server.
Our program is based on COM components, so this is a priority.
Any feasible ideas?
Thanks in advance.
I'm developing programs for CATIA too (VB.NET), and there might be a way to use a web browser to manage the programs, but I'm unable to help with that :)
Instead, what I use is a self-developed feature which updates the tool's exe files on the client from network storage or FTP.
Think of it as an algorithm which checks certain folders or storage, decides whether the program should update itself, and lets the user know. Then you run the updater, which is not part of the tool (a separate program), and it makes the changes to the main exe (copy files, update config, remove/add files, etc.).
This way you don't have to take care of the deployment and the user only clicks "update". That's it :)
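The answer above is in VB.NET; purely as an illustration of the algorithm it describes, a rough sketch might look like the following (the paths, version file, and updater name are all hypothetical):

```python
# Hypothetical self-update check: compare a local version marker against one
# on a network share and, if the share is newer, launch a separate updater
# that replaces the main exe while it is not running.
import subprocess
from pathlib import Path

LOCAL_VERSION = Path(r"C:\Tools\MyCatiaTool\version.txt")        # assumed layout
REMOTE_VERSION = Path(r"\\fileserver\tools\MyCatiaTool\version.txt")
UPDATER = Path(r"C:\Tools\MyCatiaTool\updater.exe")              # separate program


def read_version(path: Path) -> tuple:
    """Parse '1.4.2' into (1, 4, 2); a missing or garbled file counts as version 0."""
    try:
        return tuple(int(part) for part in path.read_text().strip().split("."))
    except (FileNotFoundError, ValueError):
        return (0,)


def check_for_update() -> None:
    if read_version(REMOTE_VERSION) > read_version(LOCAL_VERSION):
        print("A newer version is available; starting the updater.")
        # The updater runs as its own process so it can copy files, update
        # config, and remove/add pieces of the main tool.
        subprocess.Popen([str(UPDATER)])


if __name__ == "__main__":
    check_for_update()
```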

How to Selenium-test web sites that depend on each other? (OAuth2 IdS, protected sites)

I have an IdS (Thinktecture IdentityServer3) and various web sites trusting the IdS.
I have Selenium tests for the IdS and for each of the sites.
I use TeamCity and Octopus Deploy.
Changes in the IdS should trigger tests of the dependent web sites. Changes in an individual site should trigger only that site's tests.
What is the best way of ensuring this? I should think this is a common problem? ;)
BR, Anders
One way to do this is to use the App Settings configuration options of .NET itself. You can use config transformations to create a different configuration per site and change; you will have to map each one, though. This allows you to keep everything in the project. There is an example of such a script creating transformed config files using the command-line transform execution tool. Alternatively, you can use TeamCity with XML pokes. I've used the latter with great success on a Selenium, multi-site platform test framework: before each chained test build we modified the XML files, so the execution was dedicated to the Git branch or repo that TeamCity was set to monitor.
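The "XML poke" step amounts to rewriting a value in the test configuration just before the chained build runs. A minimal sketch of that idea, assuming a hypothetical tests.config with a single site element carrying a baseUrl attribute:

```python
# Sketch of an "XML poke": rewrite the base URL in a (hypothetical) test
# config so a chained Selenium build runs against a specific environment.
import sys
import xml.etree.ElementTree as ET


def poke_base_url(config_path: str, new_base_url: str) -> None:
    tree = ET.parse(config_path)
    # Assumed layout: <testSettings><site baseUrl="https://..."/></testSettings>
    site = tree.getroot().find("site")
    if site is None:
        raise SystemExit(f"No <site> element found in {config_path}")
    site.set("baseUrl", new_base_url)
    tree.write(config_path, encoding="utf-8", xml_declaration=True)


if __name__ == "__main__":
    # e.g. python poke.py tests.config https://staging.example.com/myapp
    poke_base_url(sys.argv[1], sys.argv[2])
```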
I found what I was looking for in the most obvious of places. On the web site builds, I added a Finish Build Trigger pointing to the IdS build. This way all my sites (I have only one :)) get Selenium-tested.

Ektron really slow to start up on localhost, how to improve this?

We're developing a solution which uses Ektron. As part of our solution we all have local IIS instances (localhost) and deploy to this local instance as part of the development life cycle.
The problem is that after a deployment, once the DLLs are replaced, IIS restarts and the app pool is recycled, which means that the Ektron DLLs need to reload themselves.
This process takes an extended amount of time.
Is there any way to improve the loading time of Ektron?
To some extent, this is the nature of a large app running as a web site rather than a web application. Removing the workarea from your local environment is one way to get the compile time down, though this will naturally not work depending on your workflow, for example if you are not using a separate dev DB or if you are storing the workarea in source control.
I have seen some attempts to pre-compile the workarea and keep the working code in a separate project (http://dev.ektron.com/forum.aspx?g=posts&t=10996), but this approach will only speed up your builds, not the recompilation of individual pages that occurs after a build as a result of running as a web site.
The last (and least best-practice) solution is to simply avoid making code changes that cause a recompile, like modifying app_code. Apps running as web sites are perfectly happy to recompile a single page's codebehind without regenerating DLLs, which is advantageous for productivity but ultimately discourages good practices like reusing code in libraries. Keep in mind that this is terrible advice, but if you have a deadline and are staring at an Ektron page loading every 30 minutes, it can be useful to know.
Same problem here. I found this: http://brianpereras.blogspot.com/2013/06/ektron-85-86-workarea-is-slow-compared.html
It says that the help documentation was moved so that it is retrieved from an online source (documentation.ektron.com). We're running Ektron 9, and I just made this change and it seems much faster on first load (after an iisreset).
The solution is to point documentation.ektron.com to 127.0.0.1 in your hosts file.
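For reference, that is a single line in C:\Windows\System32\drivers\etc\hosts, so the documentation lookup fails fast locally instead of waiting on the network:

```
127.0.0.1    documentation.ektron.com
```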
There is not; this is just how IIS works. Instead of running a local instance of Ektron, it's a good idea to point your web.config file to the database of your test environment and copy the /workarea folder to your local PC. You can't edit Ektron locally, but you can change the data on your test server and it will show up locally.

How to recognize programmatically whether the application is installed vs in development mode?

I'm trying to get license info for my app, and the MSDN docs (http://msdn.microsoft.com/en-us/library/windows/apps/hh694065.aspx) advise using the Windows.ApplicationModel.Store.CurrentAppSimulator class for that purpose during development/testing, and replacing that class with Windows.ApplicationModel.Store.CurrentApp when submitting the app to the store.
I wonder if there is any way to check in code (JavaScript in my case) whether the app was installed from the store, so my code can use the proper class and I won't have to remember to swap those classes every time I submit an app update to the store.
As far as I know, there is no such thing; at least I could not find one. In fact, LicenseInformation is what provides information about the store listing.
I use a config.js file to keep the settings that change between development and production in one place. For example, if your app talks to a service, the service URL will also likely change between development and production; the service might run on localhost during development and in an Azure environment in production. I keep a bool in there and change it by hand.
I have not automated it fully, but it is likely possible. You would need to dig through the MSBuild logs for the build created for the store; if a suitable configuration setting is found, the project can have two files, config.dev.js and config.release.js, and MSBuild could conditionally pick the right one. I haven't looked into this yet.
I think I found a solution, as described here: WinJS are there #DEBUG or #RELEASE directives? Not ideal, but it works for me.

How do I go about safely taking a screenshot of a website that I know is infected with malware?

Background:
One of my clients' websites has become a malware-infested hotbed.
Disposing of the malware has proven difficult and time-consuming, and in the meantime we still have had to do work on the site.
For now, we went to some trouble to do our work, creating a disposable VM just to run a web browser, so that we can see what the site looks like for the designers' work, for example.
I'm wondering if there's an easier (and faster) way to get an idea of what the design of the site looks like. Not everyone on the project is tech-savvy enough to be trusted with, for example, properly handling switching VMs.
Question:
Is there a method for safely seeing what a malware-infested website looks like (for example, a service which will browse the site for me and send back a screenshot), one which is ideally easy and simple enough to use that I can trust our non-tech-savvy designers to use it?
You might take a look at the Internet Archive's Wayback Machine to see whether the site has been archived.
If a screenshot is all you need, there are several online browser simulators, such as NetRenderer (which will render any web URL you give it in a given version of Internet Explorer and then supply a screenshot). You might also try BrowserStack, which requires an account and is not free, but does have a free trial period and offers more than Internet Explorer.
You could also try running a browser in Sandboxie, which is simpler to set up and use than a VM (you just install it, then use the Windows right-click menu to launch any program in a sandbox of your choosing). However, it isn't free for commercial use.
I don't know if there is a standalone tool to scan a website for malware, but I think this can help you: it's a Google tool that you can query with a request, and it will send you back a response.
Follow the link:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=168328
Hope it helps.
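If you want to run that kind of check programmatically, Google's Safe Browsing Lookup API (v4) offers a similar lookup. A minimal sketch is below, assuming you have obtained an API key (the key, client id, and URL are placeholders):

```python
# Sketch: ask the Google Safe Browsing Lookup API (v4) whether a URL is a
# known malware / social-engineering threat before anyone opens it locally.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; issued via the Google API console
ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"


def is_flagged(url: str) -> bool:
    """Return True if Safe Browsing reports the URL as a known threat."""
    payload = {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    response = requests.post(ENDPOINT, params={"key": API_KEY},
                             json=payload, timeout=10)
    response.raise_for_status()
    # An empty response body ({}) means no matches were found.
    return bool(response.json().get("matches"))


if __name__ == "__main__":
    print(is_flagged("http://example.com/"))
```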