Why is chunked transfer encoding not allowed for web applications running in CloudBees run#cloud?

I'm using an application that sends SOAP requests (HTTP POST) to my application running in the CloudBees PaaS (run#cloud). The SOAP sender gets the following error from the server: Transport error: 411 Error: Length Required. This suggests it should not use chunked transfer encoding, because chunked encoding omits the Content-Length header, which the server seems to require.
Is there some reason why chunked encoding cannot be used? I'm aware that some web servers, like Apache, have had DoS vulnerabilities related to chunked transfer encoding. Is this the reason? Or is it because run#cloud uses Nginx as a proxy?

You can now set the httpVersion end to end for your app.
To enable it: httpVersion=1.1
For example, this is how WebSocket support works:
https://developer.cloudbees.com/bin/view/RUN/WebSockets
You can (and should) also set proxyBuffering=false; this is the default for new apps.

The CloudBees Nginx router indeed uses HTTP/1.0, so it doesn't support chunked transfer encoding. As we are working on WebSocket support with a new version of Nginx, this may become available soon.
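To make the 411 concrete: a chunked body carries a hex size prefix per chunk instead of one up-front Content-Length, which is exactly what an HTTP/1.0 proxy cannot parse. Here is a minimal sketch of the wire framing (chunk_encode is an illustrative helper, not part of any CloudBees or Nginx API):

```python
def chunk_encode(payload: bytes, chunk_size: int = 8) -> bytes:
    """Frame a payload the way Transfer-Encoding: chunked does.

    Each chunk is prefixed with its size in hex; a zero-length chunk
    terminates the body. No Content-Length header is needed anywhere,
    which is why a proxy that insists on one answers 411.
    """
    out = b""
    for i in range(0, len(payload), chunk_size):
        chunk = payload[i:i + chunk_size]
        out += format(len(chunk), "x").encode("ascii") + b"\r\n" + chunk + b"\r\n"
    return out + b"0\r\n\r\n"
```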

Related

How can I use the HTTP/2 protocol in Vue?

Vue recommends using Axios for HTTP requests. As far as I know, Axios uses the HTTP/1.1 protocol, but I want to use HTTP/2.0; how can I do that? h2 has been a built-in module in Node.js on the server side, so I need h2 in Vue as a client.
The HTTP/2 connection should be transparent to your browser application; you just need to make sure your server and browser both support HTTP/2.
When both your server and browser support HTTP/2, the browser's XHR will use HTTP/2. You don't need any special setup in Vue.js.
For Axios, the HTTP/2 support problem only arises on the server side, because there it uses the following adapter, which calls Node.js's HTTP and HTTPS modules:
https://github.com/axios/axios/blob/master/lib/adapters/http.js
There is already a pull request for HTTP/2 support; you can try it if you want Axios HTTP/2 support on the server side.
On the client side, however, Axios uses the browser's XMLHttpRequest API and simply follows the browser's behaviour:
https://github.com/axios/axios/blob/master/lib/adapters/xhr.js

Apache Axis 1.4: Calling a SOAP API on an https server through an http proxy

The problem is as follows:
We have a SOAP API running behind TLS 1.2 and SNI.
Our main software is stuck on JDK 6, where it is basically impossible to connect to a server using SNI.
We need to use Axis 1.4 for SOAP calls.
We have set up a simple Apache proxy rerouting calls from http://proxyIP/foo to https://mainIP/.
The proxy works like a charm when tested manually or in a browser.
However, using Axis to do the required SOAP calls fails with an Exception:
Unrecognized SSL message, plaintext connection?
What could cause this and how could we fix this?
Every idea is appreciated.

Unable to send web requests to remote server

I receive the following error when I attempt to make a web request to a recently deployed remote server that's shared and running IIS:
SEC7120: Origin http://localhost:8000 not found in
Access-Control-Allow-Origin header.
HTTP404: NOT FOUND - The server has not found anything matching the
requested URI (Uniform Resource Identifier). (XHR)GET -
http://mywebsite/someservice/somevalue
All of this works fine when I run the web server on my local machine.
Any suggestions?
You'll need to set up CORS handling on the server so that browsers will allow cross-origin JavaScript requests. See Enabling Cross-Origin Requests (CORS) for instructions on how to do this for .NET Core apps.
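If the service is hosted in plain IIS rather than ASP.NET Core, a quick way to allow the dev origin is a custom response header in web.config. Treat this as a sketch (the origin value is taken from the error message above); the CORS middleware described in the linked article is the more robust route:

```
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Allow the local dev origin only; use CORS middleware for anything more dynamic -->
        <add name="Access-Control-Allow-Origin" value="http://localhost:8000" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```

Note this only addresses the SEC7120 error; the HTTP 404 suggests the request URI itself also needs checking on the new server.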

WCF Compression with .net 4

I have read this WCF Compression article.
I understand that for .NET 4.0, WCF compression is available out of the box.
I can't find any clear explanation of how to use it. Do I need to define any settings or change a binding, or is it compressed automatically?
I am using basicHttpBinding in IIS 7. The option "enable dynamic compression" is set to true, but I don't get how the client knows to compress the requests and to decompress the response.
Any explanation, including how to set the binding to reduce message size, will be appreciated. I am experiencing very bad performance when working against a remote server with 4MB bandwidth.
but I don't get how the client knows to compress the requests and to decompress the response?
It's all part of the HTTP spec. Since WCF uses HTTP & IIS, it can leverage the built-in compression of the web server and client HTTP stack.
Check out section 14.3:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Basically, your client needs to send a header saying it supports compression. Example: Accept-Encoding: gzip, deflate. You can set this by following the instructions in your article for the WCF Client section. Your client will then send out the right headers to the server.
Now on the server side, IIS will see that header, and it will compress the response...if configured to do so. The article you linked tells you how to set up IIS for compression for WCF services. The server will then send back a header to the client telling it the content is compressed: Content-Encoding: gzip. The client will then decompress the response and go on its merry way.
That's pretty much it; it's just a matter of getting the client headers right and the server configured to send back a compressed response. The article tells you how to do just that. Hope that helps.
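The negotiation described above is plain HTTP header logic, independent of WCF. A minimal sketch of both sides (Python purely for illustration; the function names are invented for this example, not any WCF or IIS API):

```python
import gzip

def compress_response(body: bytes, accept_encoding: str = ""):
    """Server side: honour Accept-Encoding if the client listed gzip."""
    offered = [e.strip() for e in accept_encoding.split(",")]
    if "gzip" in offered:
        return {"Content-Encoding": "gzip"}, gzip.compress(body)
    return {}, body  # no common algorithm -> send uncompressed

def read_response(headers: dict, body: bytes) -> bytes:
    """Client side: transparently undo whatever Content-Encoding says."""
    if headers.get("Content-Encoding") == "gzip":
        return gzip.decompress(body)
    return body
```

In WCF's case, the client HTTP stack and IIS play these two roles for you once the headers and dynamic compression are configured.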
Note that compression has been added to WCF 4.5. It's covered here: http://msdn.microsoft.com/en-us/library/aa751889(v=vs.110).aspx
You have to use a custom binding to enable it:
<customBinding>
  <binding name="BinaryCompressionBinding">
    <binaryMessageEncoding compressionFormat="GZip" />
    <httpTransport />
  </binding>
</customBinding>
It only works with binary encoding. Also, you have to be aware of your scenario. If you are hosted in IIS, compression may already be on. See here: http://blogs.msdn.com/b/dmetzgar/archive/2011/04/29/automatic-decompression-in-wcf.aspx
A compression sample is provided in the .NET 4 WCF samples:
http://www.microsoft.com/en-us/download/details.aspx?id=21459
This blog post explains it in more detail:
http://blogs.msdn.com/b/dmetzgar/archive/2011/03/10/compressing-messages-in-wcf-part-one-fixing-the-gzipmessageencoder-bug.aspx
There are other posts on the MSDN blogs, such as:
http://blogs.msdn.com/b/dmetzgar/archive/2011/04/29/automatic-decompression-in-wcf.aspx
When using an HTTP encoding, a good way to enable compression of responses (only) is the dynamic compression built into IIS 7 and higher.
but I don't get how the client knows to compress the requests and to decompress the response?
What follows is a description of what HTTP offers out of the box, which can be used together with the WCF HTTP(S) encodings. In addition to that, WCF 4.5 provides gzip and deflate compression of its binary encoding.
Compressed responses are part of the HTTP standard. In its request, the client signals to the server which compression methods (gzip, deflate, ...) it supports by means of the following header:
Accept-Encoding: gzip, deflate
The server, in its sole discretion, infinite wisdom and mysterious ways, is free to ignore that header and send the response uncompressed, or it may choose any one of the algorithms offered by the client, answering, say, with the following header and compressing the response body:
Content-Encoding: gzip
To make matters more complicated, the server will probably also set the following header:
Transfer-Encoding: chunked
This allows the server to omit the otherwise mandatory Content-Length header that, like HTTP headers in general, has to precede the HTTP body. (Setting the chunked encoding affects the way the body gets encoded.) So now it can compress the response body on the fly, i.e. emitting bytes as they are compressed, without having to wait for the compression of the whole body to finish just to determine the content length of the compressed result. This can save a lot of memory on the server side. (The client side, however, is now left in the dark as to the total size of the compressed response until it has received the whole response, making its decompression slightly less efficient.)
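The on-the-fly behaviour described above can be sketched as a generator: compressed bytes are framed and emitted as soon as the compressor produces them, so the total compressed length never has to be known in advance (illustrative Python; gzip_chunked is invented for this sketch, not anything a real server exposes):

```python
import zlib

def gzip_chunked(parts):
    """Yield Transfer-Encoding: chunked frames of a gzip stream, on the fly."""
    gz = zlib.compressobj(wbits=31)  # wbits=31 selects the gzip container
    for part in parts:
        data = gz.compress(part)
        if data:  # zlib buffers internally; emit only when bytes are ready
            yield format(len(data), "x").encode("ascii") + b"\r\n" + data + b"\r\n"
    tail = gz.flush()
    if tail:
        yield format(len(tail), "x").encode("ascii") + b"\r\n" + tail + b"\r\n"
    yield b"0\r\n\r\n"  # terminating zero-length chunk
```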
Note, however, that using Accept-Encoding and Content-Encoding as just described to transparently compress responses was actually a stupid idea, according to HTTP co-author Roy Fielding; what should have been used instead is the following header in the request:
TE: gzip, deflate
And the server, if it chooses to perform compression, would add the following header to its response:
Transfer-Encoding: gzip, chunked
As before, chunked is necessary if the server wants to omit the Content-Length.
Otherwise, the TE/Transfer-Encoding combo is syntactically identical to the Accept-Encoding/Content-Encoding combo, but the meaning is different, as can be gleaned from this longish discussion.
The gist of the problem: TE/Transfer-Encoding makes compression a transportation detail, whereas Accept-Encoding/Content-Encoding denotes the compressed version as the actual data (entity in HTTP parlance), with a number of unfortunate ramifications regarding caching of requests, proxy-ing, etc. of the latter.
However, the TE/Transfer-Encoding ship sailed a long time ago, and we are stuck with the AE/CE combo, which is supported by most clients and servers with a meaning that is in actuality closer to that of TE/Transfer-Encoding.
When it comes to compressed requests in HTTP, they are rarely used in practice, and there is no standard way for the client to find out whether a server supports them. Either you tell the client out of band (e.g. by hard-coding) that the server understands compressed requests (and configure the server appropriately), or you have your client proactively try compression once and, if it yields a 400 Bad Request (at least that's what an IIS 7.5 would return), fall back to non-compressed requests.
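The try-once-then-fall-back strategy from the last paragraph can be sketched as follows; `send` is a hypothetical stand-in for the actual HTTP call and returns the response status code:

```python
import gzip

def post_with_optional_compression(send, body: bytes) -> int:
    """Try a gzip-compressed request body first; on 400, retry uncompressed."""
    status = send({"Content-Encoding": "gzip"}, gzip.compress(body))
    if status == 400:  # e.g. an IIS 7.5 that rejects compressed request bodies
        status = send({}, body)
    return status
```

In a real client you would remember the outcome, so that subsequent requests skip the doomed compressed attempt.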

Authorization header missing when calling WCF web service on IIS 7

We have a WCF web service that takes a "username:password" Authorization header. This service works fine on my development Windows 7 machine and on a Windows Server 2003 machine in production.
However, our development and new production servers are Windows Server 2008 and the service fails to receive the Authorization header.
You can see the raw request from Fiddler below includes the Authorization header.
POST http://servername/service.svc/soap HTTP/1.1
Content-Type: text/xml; charset=utf-8
Authorization: test:test
VsDebuggerCausalityData: uIDPozpQ7mbpOVRFu79Tl0h3mkIAAAAAJMavDzJlIkqyjJDSIIxdVuKNB0y6n29OvukFtyRt0wwACQAA
SOAPAction: "..."
Host: servername
Content-Length: 152
Expect: 100-continue
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
I haven't successfully enabled IIS Advanced Logging to see if I can get any extra information on the server.
The IIS website is configured for anonymous authentication.
The service implements IAuthorizationPolicy and the Evaluate method is definitely being called, which is where the authorization information is missing.
It feels like I've misconfigured something in IIS, but I have no idea what! Any help would be much appreciated.
It turns out I was dealing with two different problems.
On the development server, I found that the .NET 4.0 installation was broken. This was diagnosed by this brilliant tool.
On both servers, I changed the pipeline mode from Classic to Integrated, as nobody knew why it was set to Classic in the first place. A few tweaks to the web config later, and the service was working!
The solution may also have required Basic Authentication, as suggested by Richard L, since that was missing from the production server. Note that Basic Authentication is not enabled on the service web site itself.
A very frustrating few days trying to get this working.