JSONP over HTTPS

Is it possible to send a JSONP request from domain http://www.a.com (not under my control) to domain www.b.com (under my control) over HTTPS? If so, are the parameter values in the GET request encrypted, or are they logged in access logs in plain text?
I'm looking for a secure way to do cross-domain requests. Unfortunately, POST requests via CORS over SSL don't work with Internet Explorer: it doesn't support setting cookies via Access-Control-Allow-Credentials. Is there another way to achieve this goal?

For the second part of the question: HTTPS only encrypts the channel the request uses to transfer the data. Once it arrives at the web server, all the request parameters will be logged in your access log in plain text.
You would need to use a POST request to prevent the data being written to the access log. However, you can't use JSONP with POST, since it is not possible to send a POST request using a <script> tag.

Related

Can I use VB.NET to interrogate a website to know if it uses SSL?

I have a program that asks the user to type in a URL and click Download; the program then downloads the web page.
However, some websites use SSL, and in that case the user has to prefix the URL with https:// for this to work.
The problem is that the user may not know whether the website uses SSL, and may type http://... instead of https://....
Is there some way to send a preliminary message to the website (from VB.NET) asking whether the URL should start with https or just http? If there is, I can correct the user's URL before attempting to retrieve the web page.
(I should say that it is not enough to use something like request.RequestUri.Scheme - this looks at the URL the user submitted, not the URL coming back from the server, as far as I know.)
Websites that use SSL will usually force the request to use HTTPS. That is, when you send a request over plain HTTP, for example to http://www.example.com, the website sends back a redirect response (HTTP status code 301 or 302) along with the URL the client that initiated the request should redirect the user to.
So you can try HTTP first and check the response to see whether there is a redirect; you will need to handle that in your code.
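A minimal C# sketch of that check (straightforward to translate to VB.NET), assuming the server answers HEAD requests and puts the HTTPS URL in the Location header of the redirect:

using System;
using System.Net;

class SchemeProbe
{
    // Probe a host over plain HTTP and see whether the server
    // redirects us to an https:// URL.
    static string ResolveScheme(string host)
    {
        var request = (HttpWebRequest)WebRequest.Create("http://" + host);
        request.Method = "HEAD";            // headers are all we need
        request.AllowAutoRedirect = false;  // inspect the 301/302 ourselves

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            int code = (int)response.StatusCode;
            string location = response.Headers["Location"];
            if (code >= 300 && code < 400 && location != null &&
                location.StartsWith("https://", StringComparison.OrdinalIgnoreCase))
            {
                return "https://" + host;
            }
        }
        return "http://" + host;
    }

    static void Main()
    {
        Console.WriteLine(ResolveScheme("www.example.com"));
    }
}

Some servers refuse HEAD requests; a GET probe works the same way, you just throw the body away.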

Changing request and response with an Apache Proxy Server

I want to use an Apache proxy server (mod_proxy) to intercept all requests and responses to a web server. However I want to change requests and responses before redirecting them. Simply rewriting URLs is easy and documented, but the changes I want to make are more sophisticated, namely they need to inspect the request for user credentials as well as conditionally make redirects.
Is this possible in Apache's mod_rewrite, possibly in combination with other modules?
While the main goal is to implement this in Apache, I would also be happy with an alternative solution which doesn't necessarily use Apache.
Here is a more precise explanation of what I want to achieve, to give a little more context:
Check each incoming request for user credentials. If credentials are present, they are replaced by user information which the web server can use to identify the user (ideally in the Authorization header).
For example, let's assume a request contains a cookie which authenticates the request as being sent by the user "John". This cookie is removed, and the Authorization header is set to Authenticated_by_proxy {"id":12345,"name":"John"}.
Check each response to see whether it is a 403 error. If it is and the user is not logged in, redirect the user to a login page instead of forwarding the error.
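A rough sketch of the non-Apache route (since the question allows one): a small .NET reverse proxy built on HttpListener. The backend URL, the "auth" cookie name, and the LookupUser helper are all hypothetical stand-ins for your real credential scheme, and the sketch only handles bodiless requests:

using System;
using System.Net;

class RewritingProxy
{
    const string Backend = "http://internal-backend:8080";  // hypothetical upstream

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8000/");
        listener.Start();
        while (true)
            Handle(listener.GetContext());
    }

    static void Handle(HttpListenerContext ctx)
    {
        var fwd = (HttpWebRequest)WebRequest.Create(Backend + ctx.Request.RawUrl);
        fwd.Method = ctx.Request.HttpMethod;

        // Step 1: translate the (hypothetical) auth cookie into an
        // Authorization header. Client cookies are not forwarded at all
        // in this sketch, which also strips the auth cookie itself.
        Cookie auth = ctx.Request.Cookies["auth"];
        bool loggedIn = auth != null;
        if (loggedIn)
            fwd.Headers["Authorization"] = "Authenticated_by_proxy " + LookupUser(auth.Value);

        using (var resp = (HttpWebResponse)GetResponseNoThrow(fwd))
        {
            // Step 2: turn a 403 for anonymous users into a login redirect.
            if (resp.StatusCode == HttpStatusCode.Forbidden && !loggedIn)
            {
                ctx.Response.Redirect("/login");
            }
            else
            {
                ctx.Response.StatusCode = (int)resp.StatusCode;
                resp.GetResponseStream().CopyTo(ctx.Response.OutputStream);
            }
        }
        ctx.Response.Close();
    }

    // HttpWebRequest throws on 4xx/5xx; unwrap so the status can be inspected.
    static WebResponse GetResponseNoThrow(HttpWebRequest req)
    {
        try { return req.GetResponse(); }
        catch (WebException ex) when (ex.Response != null) { return ex.Response; }
    }

    static string LookupUser(string cookieValue)
    {
        // Hypothetical: map the session cookie to the user info from the example.
        return "{\"id\":12345,\"name\":\"John\"}";
    }
}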

Browsing to IP and port number displaying raw html 400 bad request

I set up HTTPS and a redirect from HTTP to HTTPS. Browsing to just the IP address, with or without specifying HTTP or HTTPS, works great and redirects perfectly. But when browsing to X.X.X.X:443 (plain HTTP sent to the HTTPS port), the web server displays the 400 Bad Request page as raw HTML. Can I either disable the 400 Bad Request or redirect those requests to HTTPS? Please help. Thanks!
Whether that is possible depends on which web server you are using, and you didn't specify that. However...
Doing so would actually be a bad idea, as it would encourage people to use HTTP (no S) to connect to your secure server, sending their requests in plaintext. Even if the system just returned a "301 Moved Permanently" pointing to the HTTPS URL, so that the second request (with its reply) was protected, you would still have leaked the first request to a potential attacker.

How do I create a query-string authorized S3 URL?

Apparently S3 supports URLs of the form:
http://s3.amazonaws.com/bucket/file.txt?some_kind_of_auth_token
How do I generate a "secure" URL like this?
The official help covers how to do this; look for the section called "Query String Request Authentication Alternative".
GET /photos/puppy.jpg?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&
Signature=rucSbH0yNEcP9oM2XNlouVI3BH4%3D&
Expires=1175139620 HTTP/1.1
Here's a snip from the help page.
You can authenticate certain types of requests by passing the required information as query-string parameters instead of using the Authorization HTTP header. This is useful for enabling direct third-party browser access to your private Amazon S3 data, without proxying the request. The idea is to construct a "pre-signed" request and encode it as a URL that an end-user's browser can retrieve. Additionally, you can limit a pre-signed request by specifying an expiration time.
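A minimal C# sketch of that construction, using the legacy query-string (signature version 2) scheme the help page describes. The access key below is the placeholder from the docs and the secret key is hypothetical; current AWS SDKs can also generate pre-signed URLs for you (e.g. GetPreSignedURL in the .NET SDK) using the newer signature version 4:

using System;
using System.Security.Cryptography;
using System.Text;

class S3PresignedUrl
{
    static void Main()
    {
        string accessKey = "AKIAIOSFODNN7EXAMPLE";  // placeholder from the docs
        string secretKey = "YOUR_SECRET_KEY";       // placeholder
        string bucket = "bucket";
        string key = "photos/puppy.jpg";
        long expires = DateTimeOffset.UtcNow.AddHours(1).ToUnixTimeSeconds();

        // Per the docs: verb, Content-MD5, Content-Type, Expires, and the
        // canonicalized resource, separated by newlines.
        string stringToSign = "GET\n\n\n" + expires + "\n/" + bucket + "/" + key;

        string signature;
        using (var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(secretKey)))
        {
            signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        }

        Console.WriteLine("https://s3.amazonaws.com/" + bucket + "/" + key
            + "?AWSAccessKeyId=" + accessKey
            + "&Expires=" + expires
            + "&Signature=" + Uri.EscapeDataString(signature));
    }
}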

HttpWebRequest cookie with empty domain

I have an ASP.NET MVC action that sends a GET request to another server via HttpWebRequest. I'd like to include all cookies in the original action's request in the new request. Some of the System.Web.HttpCookies in the original request have empty domain values (i.e. ""), which apparently doesn't cause any issues. When I create a System.Net.Cookie using the name, value, path, and domain of each of these cookies and add it to the request's CookieContainer, I get this error:
"System.ArgumentException: The parameter '{0}' cannot be an empty string. Parameter name: cookie.Domain"
Here's some code that will throw the same error (when the cookie is added):
var request = (HttpWebRequest)WebRequest.Create("http://www.whatever.com");
request.Method = "GET";
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(new Cookie("MyCookieName", "MyCookieValue", "/", ""));
EDIT
I sort of fixed this by using "localhost" for the domain, instead of the null or empty string value from the original HttpCookie. So, why does an empty domain not work for the CookieContainer? And does HttpCookie use an empty value to signify localhost, or do I need to find another fix for this problem?
Disclaimer:
As stated by @feroze, setting your cookies' domain to localhost is not going to work out so well for you. I'm assuming you're writing a helper that allows you to tunnel HTTP requests out to foreign domains. Note that this is not best practice and in a lot of cases is not needed (i.e. jQuery has a lot of cool cross-domain support built in; also see the new CORS specification). But sometimes you may be stuck doing this (i.e. the external resource is XML only, and is on a server that doesn't support CORS).
Background Information on Cookie Domains and How They Work:
If you haven't already, take a look at HTTP Cookie: Domain and Path on Wikipedia -- pretty much everything you need to know is in there.
When evaluating a cookie, the Domain and Path are taken into account by both the client (the "local" requester) and the web server (the "foreign" responder). When a client requests a resource, the client should only send cookies where those cookies match the Domain (or a more generic parent domain) and Path (or a more generic parent path) of the URI being requested.
Web browsers handle this correctly. If a web browser has a cookie for the domain "localhost" and you're requesting "google.com", for example, the cookies for the "localhost" domain won't be sent in the request to "google.com". In fact, most modern browsers won't just decline to send them; they'll completely ignore them in the Set-Cookie response headers they receive (these are called third-party cookies -- enabling the acceptance of third-party cookies in your web browser is a huge privacy/security concern -- don't do it!).
It works in the other direction as well: even though it's unlikely for a client to include a third-party cookie in a request, if it does, the foreign web server is supposed to ignore it. It should even ignore some cookies for correct domains/paths, so as to prevent the infamous super-cookie issue (i.e. the web server hosting "example.com" should ignore cookies belonging to its parent domain ".com", because ".com" is a "public suffix").
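You can watch .NET's CookieContainer apply the same matching rule, which is also the root of the question above:

using System;
using System.Net;

class DomainMatching
{
    static void Main()
    {
        var jar = new CookieContainer();
        jar.Add(new Uri("http://localhost/"), new Cookie("session", "abc"));

        // The cookie only comes back for the domain it belongs to.
        Console.WriteLine(jar.GetCookies(new Uri("http://google.com/")).Count); // 0
        Console.WriteLine(jar.GetCookies(new Uri("http://localhost/")).Count);  // 1
    }
}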
What You Should Do [if you have to]:
The course of action I recommend is this: when you read in your client's cookies (I'm not an MVC guy, but in regular ASP.NET this would be Request.Cookies), loop through them (making sure to filter out your own site's legitimate cookies, especially SessionId, etc. -- or use Path properly so they never get sent to this page in the first place), then add them one at a time to the outgoing request's cookie collection, rewriting the domain to be "www.whatever.com" (per your example -- if you're doing this dynamically, load the URL into a new Uri() object and use the .Host property) and setting the Path to "/". This builds the "Cookie" header for the outgoing request to the foreign web server.
When that request returns to your server, you then need to check its response for new cookies. Those cookies can be repackaged and sent back down to your client in much the same kind of loop as in the previous paragraph, except you'll want to rewrite the domain to be Request.Url.Host, and set the Path back to "/" -- unless the path to your passthru page is static (I'm guessing it isn't, since you're using MVC), in which case you'd want to set it to Request.Url.AbsolutePath, for instance.
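Here's a rough sketch of both loops, assuming an ASP.NET MVC action (so Request and Response are in scope) and using "www.whatever.com" from your example; the /feed.xml path and the controller name are hypothetical:

using System;
using System.IO;
using System.Net;
using System.Web;
using System.Web.Mvc;

public class PassthruController : Controller
{
    public ActionResult Get()
    {
        // Hypothetical foreign resource, per the question's example domain.
        var target = new Uri("http://www.whatever.com/feed.xml");
        var outgoing = (HttpWebRequest)WebRequest.Create(target);
        outgoing.Method = "GET";
        outgoing.CookieContainer = new CookieContainer();

        // Forward the client's cookies, rewriting Domain/Path as described above.
        foreach (string name in Request.Cookies.AllKeys)
        {
            if (name == "ASP.NET_SessionId") continue;  // keep our own cookies at home
            HttpCookie c = Request.Cookies[name];
            outgoing.CookieContainer.Add(new Cookie(c.Name, c.Value, "/", target.Host));
        }

        using (var response = (HttpWebResponse)outgoing.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // Repackage the foreign server's cookies for our own client.
            foreach (Cookie c in response.Cookies)
            {
                Response.Cookies.Add(new HttpCookie(c.Name, c.Value)
                {
                    Path = "/",
                    Domain = Request.Url.Host
                });
            }
            return Content(reader.ReadToEnd());
        }
    }
}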
Happy Coding!
EDIT:
Also, you'll want to set the X-Forwarded-For header on the outgoing request, so that the website you're calling doesn't think your web server is one single client that's been spamming the crap out of them.
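For example, continuing the sketch above (in ASP.NET, Request.UserHostAddress is the original client's IP):

outgoing.Headers.Add("X-Forwarded-For", Request.UserHostAddress);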
Not sure whether it solves your problem, but to add cookies without the "Domain" property you must write them into the headers directly, using HttpRequestHeader.Cookie as follows.
request.Headers.Add(HttpRequestHeader.Cookie, "Your cookies...");
Hope it helps!
Some background
This occurs because CookieContainer is a client-side container designed to be reused across multiple HttpWebRequests. Reusing it provides the expected cookie behavior: cookies set by the remote host are sent back with every subsequent HttpWebRequest targeted at the same host.
As a result of the reuse, a CookieContainer might actually contain cookies from multiple requests and/or hosts.
So, in order to determine which of the cookies in the container need to be sent with a particular HttpWebRequest to some host (domain), CookieContainer examines the Domain and Path properties.
That's why a Cookie in a CookieContainer needs to have a valid Domain.
Conversely, on the server side, cookies are delivered via a different type, CookieCollection, which is a simple list of cookies with no extra logic.
Specifically, in your case, while copying cookies from the CookieCollection to the CookieContainer, you need to set the Domain property of every cookie to the domain you are going to forward the request to, so that HttpWebRequest will know to include the cookies when sending the request.
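A minimal sketch of that copy, assuming ASP.NET's Request is in scope and "www.whatever.com" (from your example) is the host being forwarded to:

var forward = (HttpWebRequest)WebRequest.Create("http://www.whatever.com");
forward.CookieContainer = new CookieContainer();
foreach (string name in Request.Cookies.AllKeys)
{
    HttpCookie c = Request.Cookies[name];
    // The Domain must match the host the request goes to, or the
    // container will silently leave the cookie out.
    forward.CookieContainer.Add(new Cookie(c.Name, c.Value, "/", "www.whatever.com"));
}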
You are trying to get cookies sent to localhost, right?
Why don't you do something like this, where you give your own machine a real name:
Edit your hosts file and add the line "127.0.0.1 myname.com".
Test using myname.com, which is actually your localhost.
Your browser or app will not know the difference and will send cookies to myname.com if that is where the cookie belongs.
Detailed info:
The hosts file on Windows is located at C:\Windows\System32\drivers\etc\hosts