I am using Apache Tomcat with mod_jk and running Shindig on it. I am trying to pass the URL below to it:
http://download.finance.yahoo.com/d/quotes.csv?s=^GSPTSE+^SPCDNX+MIC.TO+ABX.TO+AEM.TO&f=snl1d1t1c1&e=.csv&random=5683
and it fails with error 400 (invalid URL parameter).
If I pass the URL without any parameters, it works perfectly fine.
You can have a look at the console log for the URL below:
http://portaltab.com/shindig/gadgets/ifr?url=http://igstock.googlecode.com/svn/trunk/modules/canada_stock_market_on_ig.xml
I tried so many things, but no luck. I am not sure whether it is a Tomcat issue or something else.
If any expert has experienced the same issue, could you please share some info?
Thank you.
Regards,
Raj
Most likely your issue is that carets (^) are not valid URL characters. They are considered "unsafe" per RFC 1738. Quoting from that RFC:
...Other characters are unsafe because gateways and other transport
agents are known to sometimes modify such characters. These
characters are "{", "}", "|", "\", "^", "~", "[", "]", and "`".
You should encode the carets in your URL as %5E. Some programmers and libraries do not do this by default, since the caret is not a commonly used symbol and some systems accept it without error even though they are not fully compliant.
It's not clear from your example whether you are encoding your URL, and if so, where you are doing so. If you are not encoding at all, you might also need to encode the plus symbols. A fully encoded s value for your example would be:
%5EGSPTSE%2B%5ESPCDNX%2BMIC.TO%2BABX.TO%2BAEM.TO
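If you are building the URL in Java (a reasonable guess, since you are running Shindig on Tomcat; the class and variable names below are just for illustration), a minimal sketch using java.net.URLEncoder to produce that encoded value could look like this:

import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class QuoteUrlBuilder {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String symbols = "^GSPTSE+^SPCDNX+MIC.TO+ABX.TO+AEM.TO";

        // URLEncoder escapes the unsafe/reserved characters:
        // "^" becomes %5E and "+" becomes %2B; "." is left alone.
        String encoded = URLEncoder.encode(symbols, "UTF-8");

        String url = "http://download.finance.yahoo.com/d/quotes.csv"
                + "?s=" + encoded
                + "&f=snl1d1t1c1&e=.csv";

        System.out.println(url);
        // ...quotes.csv?s=%5EGSPTSE%2B%5ESPCDNX%2BMIC.TO%2BABX.TO%2BAEM.TO&f=snl1d1t1c1&e=.csv
    }
}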
Related
I'm using SoapUI to automate tests against my company's APIs. I've successfully set up and run dozens of these cases.
This time, I'm getting an error which, after exhaustive tracking down, I've found is due to our APIs requiring the "@" char itself rather than the URL-friendly "%40" substitution.
The post request I want is structured like this:
https://<endpoint>.com/<resource>?<param>&email_address@example.com
And what I'm seeing made is:
https://<endpoint>.com/<resource>?<param>&email_address%40example.com
How can I force SoapUI to use the char itself?
I've tried setting headers, media type and representations (though possibly not through all permutations).
Thanks.
Use the "Disable Encoding" option for the parameter.
One of my clients has a PHP script that kept crashing inexplicably. After hours of research, I determined that if you send any PHP script a variable (either through GET or POST) that contains " having t", or escaped for the URL "+having+t", it crashes the script and returns a "403 Forbidden" error. To test it, I made a sample script whose entire contents are:
<?php echo "works";
I put it live (temporarily) here: http://primecarerefer.com/test/test.php
Now if you try sending it some data like: http://primecarerefer.com/test/test.php?x=+having+x
It fails. The last letter can be any letter and it will still crash, but changing any other letter makes the script load fine. What would cause this and how can it be fixed? The link is live for now if anyone wants to try out different combinations.
PS - I found that if I get the 403 error a bunch of times in a row, the server blocks me for 15 minutes.
I had this type of issue on a web server that ran Apache mod_security, but it was very poorly configured; mod_security actually ships with very bad default regex rules, which are very easy to trip with valid POST or GET data.
To be clear, this almost certainly has nothing to do with PHP or HTML: it's about POST and GET data passing through mod_security, and mod_security rejecting the request because it believes it is an SQL injection attempt.
You can edit the rules yourself, depending on your server access, but if it's mod_security, I know you can't do anything via PHP to get around this.
/etc/httpd/conf.d/mod_security.conf (old path, it's changed, but it gives the idea)
Examples of the default rules:
SecFilter "delete[[:space:]]+from"
SecFilter "insert[[:space:]]+into"
SecFilter "select.+from"
These are samples of the rules, taken from:
https://www.howtoforge.com/apache_mod_security
Here is a URL where they trip the filter:
http://primecarerefer.com/test/test.php?x=%20%22%20%20select%20from%22
Note that the article is very old and the rules are structured quite differently now, but the bad regex remains; i.e. select [any number of characters, no matter how far apart or close together] from will trip it, and any input that matches these loose rules will trip it.
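To see just how loose those patterns are, here is a small sketch (in Java, purely for illustration; it is not part of mod_security) showing that a perfectly innocent sentence matches the same sort of regex:

import java.util.regex.Pattern;

public class LooseFilterDemo {
    public static void main(String[] args) {
        // The same shape of pattern as the default SecFilter rules above.
        Pattern selectFrom = Pattern.compile("select.+from", Pattern.CASE_INSENSITIVE);

        String harmless = "Please select your shipping option from the list below.";

        // Prints "true": an ordinary sentence trips the SQL-injection filter.
        System.out.println(selectFrom.matcher(harmless).find());
    }
}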
But editing those default files requires access to them, and it also assumes they won't be altered by an upgrade of Apache mod_security at some point, so it's not a good way to fix the problem. I found that moving to a better, more professionally set up host fixed those issues for us. It does help, though, to talk to the hosting support so you know what the cause of the issue is.
In fact 'having' is not irrelevant at all; it's part of the SQL injection filters in the regex rules run on POST/GET data. We used to hit this all the time when admins edited CMS pages, which would invariably trigger some SQL filter, since any string of human words will sooner or later contain something that matches 'select.*from' or 'insert.*into', etc.
This mod_security issue used to drive me bonkers when trying to debug why backend edit-form updates would just hang, until I finally realized it was the badly written generic regex patterns in the mod_security file itself.
In a sense, this isn't an answer, because the only fix is going into the server and either editing the rules file, which is pretty easy, or disabling mod_security, or moving to a web host that doesn't use those bad generic defaults.
Because of localization ("language adaptation") I need to place some "special" chars in a custom header (chars like é, á, í, ç, and others)...
On the server side I'm using ASP.NET MVC.
It all works fine on chrome.
But in Safari... I can't figure out which encoding Safari uses...
I tried:
UTF-8,
UTF-16,
ASCII,
URL encoding,
a few ISO encodings
but alert(headerValue) always returns crazy chars...
Can anyone tell me which encoding to use?
There was a specification in the past regarding HTTP header encoding: RFC 2047. But it seems not to be implemented anymore and has even been removed.
Here are some related links:
What character encoding should I use for a HTTP header?
HTTP headers encoding/decoding in Java
https://bugzilla.mozilla.org/show_bug.cgi?id=601933
In your case, perhaps you could use a URL-encoded string for the value of this custom header.
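For example (sketched in Java rather than ASP.NET, purely to illustrate the idea; the sample string is made up), you would percent-encode the value before setting the header so that only ASCII goes over the wire, and decode it again on the client (decodeURIComponent in JavaScript, remembering that URLEncoder turns spaces into '+'):

import java.net.URLDecoder;
import java.net.URLEncoder;

public class HeaderValueDemo {
    public static void main(String[] args) throws Exception {
        String original = "ação é í ç";

        // Percent-encode so the header value contains only ASCII characters.
        String headerValue = URLEncoder.encode(original, "UTF-8");
        System.out.println(headerValue);   // a%C3%A7%C3%A3o+%C3%A9+%C3%AD+%C3%A7

        // The client decodes it again; in the browser that would be
        // decodeURIComponent (after converting '+' back to spaces).
        String roundTripped = URLDecoder.decode(headerValue, "UTF-8");
        System.out.println(roundTripped.equals(original));  // true
    }
}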
Hope it helps you,
Thierry
I have a GET request being sent to a WebLogic 12c server, which carries user info, and on the server side I grab this info to process the request.
And the GET request looks like the one below:
URL:/prem/JSP/xml/prems.jsp?username=rjanga&password=1234roh#&address=3450Rivast&city=FT+COLLINS&state=CO&zip=80526.
Since I have a '#' symbol in my password, the WebLogic server is ignoring it and anything after it.
It only sees the URL as:
/prem/JSP/xml/prems.jsp?username=rjanga&password=1234roh (ignoring the symbol '#' and all strings after it, like address, city, etc.)
After doing some research and going through this link, I tried the solution mentioned in it,
but it did not help. Any help is appreciated.
You're going to have to do the encoding on the password field. Putting the password in the URL is HORRIBLY insecure. You shouldn't be doing it. That said, here is some info:
From http://java.sun.com/j2se/1.5.0/docs/api/java/net/URL.html :
The URL class does not itself encode or decode any URL components according to the escaping mechanism defined in RFC2396. It is the responsibility of the caller to encode any fields, which need to be escaped prior to calling URL, and also to decode any escaped fields, that are returned from URL. Furthermore, because URL has no knowledge of URL escaping, it does not recognise equivalence between the encoded or decoded form of the same URL. For example, the two URLs:
http://foo.com/hello world/
and
http://foo.com/hello%20world
would be considered not equal to each other.
Note, the URI class does perform escaping of its component fields in certain circumstances. The recommended way to manage the encoding and decoding of URLs is to use URI, and to convert between these two classes using toURI() and URI.toURL().
It will be up to you to encode and decode those URL strings.
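As a minimal sketch (the parameter values below are just the placeholders from the question), building the query string with java.net.URLEncoder keeps the '#' intact as %23:

import java.net.URLEncoder;

public class PremUrlBuilder {
    public static void main(String[] args) throws Exception {
        // Values taken from the question, purely as placeholders.
        String query = "username=" + URLEncoder.encode("rjanga", "UTF-8")
                + "&password=" + URLEncoder.encode("1234roh#", "UTF-8")
                + "&address=" + URLEncoder.encode("3450Rivast", "UTF-8")
                + "&city=" + URLEncoder.encode("FT COLLINS", "UTF-8")
                + "&state=CO&zip=80526";

        // '#' is now %23, so the server no longer treats it as a fragment marker.
        System.out.println("/prem/JSP/xml/prems.jsp?" + query);
        // /prem/JSP/xml/prems.jsp?username=rjanga&password=1234roh%23&address=3450Rivast&city=FT+COLLINS&state=CO&zip=80526
    }
}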
My web host has refused to help me with this, so I'm coming to the wise folks here for some "black-box debugging" help. Here's an edited version of what I sent to them:
I have two (among other) domains at dreamhost:
1) thefigtrees.net
2) shouldivoteformccain.com
I noticed today that when I host a CGI script on #1, by the time the
CGI script runs, the HTTP GET query string passed to it as the QUERY_STRING
environment variable has already been URL-decoded. This is a problem because
it then means that a standard CGI library (such as perl's CGI.pm) will try to
split on ampersands and then decode the string itself. There are two
potential problems with this:
1) the string is doubly-decoded, so if a value is submitted to the script
such as "%2525", it will end up being treated as just "%" (decoded twice)
rather than "%25" (decoded once)
2) (more common) if there is an ampersand in a value submitted, then it
will get (properly) submitted as %26, but the QUERY_STRING env. variable will
have it already decoded into an "&" and then the CGI library will improperly
split the query string at that ampersand. This is a big problem!
The script at http://thefigtrees.net/test.cgi demonstrates this. It echoes back the
environment variables it is called with. If you navigate in a browser to:
http://thefigtrees.net/lee/test.cgi?x=y%26z
You can see that REQUEST_URI properly contains x=y%26z (still encoded) but that
QUERY_STRING already has it decoded to x=y&z.
If I repeat the test at domain #2 (
http://www.shouldivoteformccain.com/test.cgi?x=y%26z ) I see that the
QUERY_STRING remains undecoded, so that CGI.pm then splits and decodes
correctly.
I tried disabling my .htaccess files on both to make sure that was not the
problem, and saw no difference.
Could anyone speculate on potential causes of this, since my Web host seems unwilling to help me?
thanks,
Lee
I have the same behavior in Apache.
I believe mod_rewrite will automatically decode the URL if it is installed; however, I have seen the auto-decode behavior even without it, and I haven't tracked down the other culprit.
A common workaround is to double-encode the input parameter (taking advantage of the fact that URL decoding is safe when called on an unencoded value).
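Sketched in Java purely to show the mechanics of the two decodes (the actual request would be built by whatever client calls the Perl CGI), a double-encoded value survives the host's premature decode:

import java.net.URLDecoder;
import java.net.URLEncoder;

public class DoubleEncodeDemo {
    public static void main(String[] args) throws Exception {
        String value = "y&z";

        // Encode twice: "&" -> "%26" -> "%2526".
        String once = URLEncoder.encode(value, "UTF-8");
        String twice = URLEncoder.encode(once, "UTF-8");
        System.out.println(twice);                        // y%2526z

        // The host's premature decode strips one layer...
        String afterHostDecode = URLDecoder.decode(twice, "UTF-8");
        System.out.println(afterHostDecode);              // y%26z

        // ...and the CGI library's own decode restores the original value.
        System.out.println(URLDecoder.decode(afterHostDecode, "UTF-8"));  // y&z
    }
}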
Curious. Nothing I can see from here gives us a clue why this would happen... I can only confirm that it is an environment bug, and I suspect configuration differences, maybe rewrite rules.
Per CGI 1.1, this decoding should only happen to SCRIPT_NAME and PATH_INFO, not QUERY_STRING. It's pointless and annoying that it happens at all, but that's the spec. Using REQUEST_URI instead of those variables where available (i.e. on Apache) is a common workaround for places where you want to put out-of-bounds and Unicode characters in path parts, so it might be reasonable to do the same for query strings until some sort of resolution is available from the host.
VPSs are cheap these days...