Intercepting XMLHttpRequest for a specific address using Greasemonkey

I'm trying to write a Greasemonkey script that will work on both Chrome and Firefox: a script that will block XMLHttpRequest calls to a certain hard-coded URL.
I am kind of new to this area and would appreciate some help.
Thanks.

It's possible now using
// @run-at document-start
http://wiki.greasespot.net/Metadata_Block#.40run-at
but it needs more refinement; check the example:
http://userscripts-mirror.org/scripts/show/125936
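The core trick behind that example can be sketched as follows. This is a minimal skeleton, not the full script from the link: BLOCKED_URL and patchXhrOpen are illustrative names I've chosen, and the metadata block is reduced to the essentials.

```javascript
// ==UserScript==
// @name        Block XHR to a specific URL
// @run-at      document-start
// ==/UserScript==

// Hypothetical hard-coded address to block (placeholder).
var BLOCKED_URL = 'http://example.com/tracker';

// Wrap XMLHttpRequest.prototype.open so any request whose URL starts
// with the blocked address throws before it is ever opened.
function patchXhrOpen(xhrProto, blockedUrl) {
  var realOpen = xhrProto.open;
  xhrProto.open = function (method, url) {
    if (String(url).indexOf(blockedUrl) === 0) {
      throw new Error('Blocked request to ' + url);
    }
    return realOpen.apply(this, arguments);
  };
}

// Only patch when an XHR implementation is actually present.
if (typeof XMLHttpRequest !== 'undefined') {
  patchXhrOpen(XMLHttpRequest.prototype, BLOCKED_URL);
}
```

Because the patch must be in place before the page's own scripts run, @run-at document-start is essential here; note this only covers XHR, not fetch() or inline src requests.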

This is almost impossible to do with Greasemonkey; it is the wrong tool for the job. Here's what to use instead, most effective first:
Set your hardware firewall, or router, to block the URL.
Set your software firewall to block the URL.
Use Adblock to block the URL.
Write a convoluted userscript that tries to block requests from one set of pages to a specific URL. Note that this potentially has to block inline src requests as well as AJAX, etc.

Related

ZAP automatically returning same response on breakpoint

I'm using OWASP ZAP as a proxy tool for testing mobile applications.
What I'm trying to do is set a breakpoint on some URL and return a custom response to test the application's UI or functionality.
Currently, whenever the breakpoint is triggered, I have to manually let the request pass, then change the response and let that pass in order to see the change in the app. When I have to do this multiple times, it's not very convenient.
Is it possible to make a breakpoint on a URL that will return some predefined response every time it is triggered?
If it's not possible, are you aware of any other tool that is?
Yes, you can do that, but not with breakpoints - they are manual only.
Instead you can use either:
Replacer
Scripts
The Replacer is easier to set up but more restricted, while scripts can do absolutely anything. There are example scripts for replacing content in response headers and bodies.
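As a rough sketch of the scripts route, an HTTP Sender script (JavaScript engine) could look something like this. TARGET_URL and CANNED_BODY are placeholders, and the exact HttpMessage method names should be checked against the example scripts shipped with your ZAP version:

```javascript
// Placeholders: the URL to intercept and the canned response to return.
var TARGET_URL = 'https://api.example.com/v1/status';
var CANNED_BODY = '{"status": "maintenance"}';

// Called by ZAP before each request is forwarded; nothing to do here.
function sendingRequest(msg, initiator, helper) {
}

// Called by ZAP for each response passing through the proxy; swap in
// the predefined body whenever the request URI matches the target.
function responseReceived(msg, initiator, helper) {
  var uri = msg.getRequestHeader().getURI().toString();
  if (uri.indexOf(TARGET_URL) === 0) {
    msg.setResponseBody(CANNED_BODY);
    msg.getResponseHeader().setContentLength(msg.getResponseBody().length());
  }
}
```

Unlike a breakpoint, this fires automatically on every matching request, which is exactly the "predefined response every time" behaviour asked about.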

How to force the dispatcher to cache URLs with GET parameters

As I understood after reading these links:
How to find out what does dispatcher cache?
http://docs.adobe.com/docs/en/dispatcher.html
The Dispatcher always requests the document directly from the AEM instance in the following cases:
If the HTTP method is not GET. Other common methods are POST for form data and HEAD for the HTTP header.
If the request URI contains a question mark "?". This usually indicates a dynamic page, such as a search result, which does not need to be cached.
The file extension is missing. The web server needs the extension to determine the document type (the MIME-type).
The authentication header is set (this can be configured)
But I want to cache URLs with parameters.
If I request myUrl/?p1=1&p2=2&p3=3 once,
then the next request to myUrl/?p1=1&p2=2&p3=3 must be served from the dispatcher cache, but myUrl/?p1=1&p2=2&p3=3&newParam=newValue should be served by CQ the first time and from the dispatcher cache for subsequent requests.
I think the config /ignoreUrlParams is what you are looking for. It can be used to whitelist the query parameters that determine whether a page is cached / delivered from cache or not.
Check http://docs.adobe.com/docs/en/dispatcher/disp-config.html#Ignoring%20URL%20Parameters for details.
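As a sketch, the relevant section of dispatcher.any could look like this (the parameter name is taken from the question; check the exact syntax against your dispatcher version):

```
/ignoreUrlParams
  {
  # "deny" = the parameter counts, so its presence prevents caching
  /0001 { /glob "*" /type "deny" }
  # "allow" = the parameter is ignored, so the page is cached as if
  # the parameter were not there at all
  /0002 { /glob "newParam" /type "allow" }
  }
```

One caveat: an ignored ("allow"-ed) parameter's value is not part of the cache key, so all values of that parameter will share one cached copy - this mechanism cannot cache a separate page per parameter value.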
It's not possible to cache requests that contain a query string. Such calls are considered dynamic, so they are not expected to be cached.
On the other hand, if you are certain that such a request should be cached because your application/feature is query-driven, you can work around it this way:
Add an Apache rewrite rule that moves the query string of a given parameter into a selector.
(optional) Add a CQ filter that recognizes the selector and moves it back into the query string.
The selector can be constructed as key_value, but that puts some constraints on what can be passed here.
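A hypothetical rewrite rule for the first step might look like this (the path, parameter name and flags are placeholders to adapt):

```
# Move ?somevalue=... into a selector and drop the query string
# (the trailing "?" clears it), so the dispatcher sees a cacheable URL.
RewriteCond %{QUERY_STRING} ^somevalue=([^&]+)$
RewriteRule ^/mypage\.html$ /mypage.somevalue-%1.html? [PT,L]
```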
You can do this with Apache rewrites BUT it would not be ideal practice. You'll be breaking the pattern that AEM uses.
Instead, use selectors and extensions. E.g. instead of server.com/mypage.html?somevalue=true, use:
server.com/mypage.myvalue-true.html
Most things you will need to do that would ever get cached will work this way just fine. If you give me more details about your requirements and what you are trying to achieve, I can help you perfect the solution.

RESTlet redirect sending the browser a riap URI

I'm using RESTlet to handle PUT requests from a browser and after a successful PUT, I want to redirect the browser to different web page.
Seems like a standard PUT->REDIRECT->GET to me, but I'm not figuring out how to do it in my RESTlet resource.
Here is my code after the PUT has done the requested work:
getResponse().redirectSeeOther("/account");
However that results in the browser getting:
Response Headers
Location riap://application/account
Of course, the "riap" protocol is meaningless to the browser and "application" is not a server name. It seems like there ought to be a way to send a redirect back to the browser without building the entire URL in my redirectSeeOther() call. Building the URL by hand seems like it could be error-prone.
Is there an easy way to redirect without building the whole URL from the ground up?
Thanks!
Sincerely,
Stephen McCants
Although I am not 100% sure what type of class you are trying to do this in, try:
// Build the redirect target from the request's root reference, so the
// scheme, host and port the client actually used are preserved.
Reference reference = getRequest().getRootRef().clone().addSegment("account");
redirectSeeOther(reference);
I usually also then set the body as
return new ReferenceList(Arrays.asList(reference)).getTextRepresentation();
but that may not be necessary for all clients, or at all. I will usually use this style in a class that extends ServerResource - Restlet (2.0.x or 2.1.x).

returning absolute vs relative URIs in REST API

Suppose DogManagementPro is an application written in a client/server architecture, where each customer who buys it runs the server on his own PC and accesses it either locally or remotely.
Suppose I want to support a "list all dogs" operation in the DogManagementPro REST API,
so a GET to http://localhost/DogManagerPro/api/dogs should fetch the following response:
<dogs>
<dog>http://localhost/DogManagerPro/api/dogs/ralf</dog>
<dog>http://localhost/DogManagerPro/api/dogs/sparky</dog>
</dogs>
Now I want to access it remotely on my local LAN [the local IP of my machine is 192.168.0.33].
What should a GET to http://192.168.0.33:1234/DogManagerPro/api/dogs fetch?
should it be:
<dogs>
<dog>http://localhost/DogManagerPro/api/dogs/ralf</dog>
<dog>http://localhost/DogManagerPro/api/dogs/sparky</dog>
</dogs>
or perhaps:
<dogs>
<dog>http://192.168.0.33/DogManagerPro/api/dogs/ralf</dog>
<dog>http://192.168.0.33/DogManagerPro/api/dogs/sparky</dog>
</dogs>
?
Some people argue that I should sidestep the problem altogether by returning just a path element, like so:
<dogs>
<dog>/DogManagerPro/api/dogs/ralf</dog>
<dog>/DogManagerPro/api/dogs/sparky</dog>
</dogs>
What is the best way?
I've personally always used non-absolute URLs. It solves a few other problems as well, such as reverse / caching proxies.
It's a bit more complicated for the client though, and if they want to store the document as-is, it may mean they also need to store the base URL, or expand the inner URLs.
If you do choose to go the full-URL route, I would not recommend using HTTP_HOST; instead, set up multiple vhosts and an environment variable, and use that.
This solves the issue if you later need proxies in front of your origin server.
I would say absolute URLs, created based on the Host header that the client sent:
<dogs>
<dog>http://192.168.0.33:1234/DogManagerPro/api/dogs/ralf</dog>
<dog>http://192.168.0.33:1234/DogManagerPro/api/dogs/sparky</dog>
</dogs>
The returned URIs should be something the client is able to resolve.
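One way to see why the path-only form also stays resolvable from any address: the client resolves relative URIs against the base URL it actually used to reach the server. A small illustration with the WHATWG URL class, using the hosts from the question:

```javascript
// A relative path from the response body is resolved against whatever
// base the client actually used, so the same document works both
// locally and over the LAN.
var path = '/DogManagerPro/api/dogs/ralf';

var local = new URL(path, 'http://localhost/').toString();
var lan   = new URL(path, 'http://192.168.0.33:1234/').toString();

console.log(local); // http://localhost/DogManagerPro/api/dogs/ralf
console.log(lan);   // http://192.168.0.33:1234/DogManagerPro/api/dogs/ralf
```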

Double request from mod-rewrite

I've written a module that sets Apache environment variables to be used by mod-rewrite. It hooks into ap_hook_post_read_request() and that works fine, but if mod-rewrite matches a RewriteRule then it makes a second call to my request handler with the rewritten URL. This looks like a new request to me in that the environment variables are no longer set and therefore I have to execute my (expensive) code twice for each hit.
What am I doing wrong, or is there a work around for this?
Thanks
You can use the [NS] modifier on a rule to cause it to not be processed for internal subrequests (the second pass you're seeing is an internal subrequest).
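For example (the pattern and target here are placeholders, not from the question):

```
# The [NS] flag tells mod_rewrite to skip this rule when it is
# processing an internal subrequest.
RewriteRule ^/app/(.*)$ /handler.php?page=$1 [NS,L]
```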
As I understand it, the NS flag (suggested in another answer) on a rule makes it evaluate as "if I am being called a second time, ignore me". The trouble is, by then it's too late since the hook has already been called. I believe this will be a problem no matter what you do in mod_rewrite. You can detect the second request, but I don't know of any way to prevent the second request.
My best suggestion is to put the detection in your handler before your (expensive) code and exit if it's being run a second time. You could have mod_rewrite append something to the URL so you'd know when it's being called a second time.
However...
If your (expensive) code is being called on every request, it's also being called on images, css files, favicons, etc. Do you really want that? Or is that possibly what you are seeing as the second call?
Thanks a bunch, I did something similar to what bmb suggested and it works! But rather than involving mod-rewrite in this at all, I added a "fake" request header in my module's request handler, like so:
apr_table_set(r->headers_in, "HTTP_MY_MODULE", "yes");
Then I could detect it at the top of my handler on the second rewritten request. Turns out that even though mod-rewrite (or Apache?) doesn't preserve added env or notes variables (r->subprocess_env, r->notes) in a subrequest, it does preserve added headers.
As for my expensive code getting called on every request, I have a configurable URL suffix/extension filter in the handler to ignore image, etc requests.