How to configure an upload body size restriction in Traefik?

I'm using the Traefik Docker image v1.5, and when I try to upload a big file (like 1 GB), Traefik closes the connection and logs:
exceeding the max size 4194304
So is there any way to modify/remove this default restriction?

Traefik provides a Buffering middleware that gives you control over how requests are read before being sent to your services.
With Buffering, Traefik reads the entire request into memory (possibly buffering large requests into disk), and rejects requests that are over a specified limit.
This can help services deal with large data (multipart/form-data for example), and can minimize time spent sending data to a service.
Example configuration:
[backends]
  [backends.backend1]
    [backends.backend1.buffering]
      maxRequestBodyBytes = 10485760
      memRequestBodyBytes = 2097152
      maxResponseBodyBytes = 10485760
      memResponseBodyBytes = 2097152
      retryExpression = "IsNetworkError() && Attempts() <= 2"
With the maxRequestBodyBytes option, you can configure the maximum allowed body size for the request (in bytes).
If the request exceeds the allowed size, it is not forwarded to the service and the client gets a 413 (Request Entity Too Large) response.
Traefik Buffering Documentation
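If your backends are defined through the Docker provider rather than the TOML file, the same options can be expressed as container labels. A sketch, assuming your Traefik version supports the buffering backend labels (the traefik.backend.buffering.* names below simply mirror the TOML options above, so verify them against your version's docs), sized here for ~1 GB uploads; my-upload-service is a placeholder image name:

docker run -d \
  --label "traefik.backend.buffering.maxRequestBodyBytes=1073741824" \
  --label "traefik.backend.buffering.memRequestBodyBytes=2097152" \
  my-upload-service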

For me, the problem was not with Traefik v2 but with Cloudflare: Cloudflare caps uploads at 100 MB on free accounts.
I had to upload via an internal URL to get around this issue.

Related

Weblogic 12.2.1 managed server access.log not updating

I have developed some JAX-RS web services and deployed the WAR file to a managed server on WebLogic 12.2.1. When I call a web service, either through a client program, or via web browser, I noticed that nothing is getting updated in E:\MLM\MyDomain\servers\MyAppSrv01\logs\access.log. This file stays empty all the time. When the next day comes (at 12.00am), the file will roll over to access.logNNNNN (e.g. access.log00004) and then I can see some of the GET and POST calls of the previous day appearing in access.logNNNNN. The strange thing is that only some of the web service calls appear in access.logNNNNN, even though I make many calls throughout the testing. What could be the problem?
Thanks in advance.
You are not seeing access logs at run time because of the configured log buffer size. To reduce I/O, WebLogic writes log entries to a buffer first and only flushes them to the access.log file when the buffer limit is reached.
Log Buffer Size
The maximum size (in kilobytes) of the buffer that stores HTTP requests. When the buffer reaches this size, the server writes the data to the HTTP log file. Use the LogFileFlushSecs property to determine the frequency with which the server checks the size of the buffer.
You can set this value to 0 for run-time logging.
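If you prefer scripting to clicking through the console, here is a hedged WLST sketch; the MBean path and the BufferSizeKB attribute follow the WebLogic 9/10 MBean tree as I recall it, and the server name, URL, and credentials are placeholders, so verify everything against your release:

# WLST: flush HTTP access-log entries immediately by setting the buffer size to 0
connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/Servers/MyAppSrv01/WebServer/MyAppSrv01/WebServerLog/MyAppSrv01')
cmo.setBufferSizeKB(0)
save()
activate()
disconnect()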

Difference in response time between http vs https

I tested my web site with 100 users over HTTP and over HTTPS. The response time over HTTPS is much higher, nearly four times the HTTP response time. Can anyone explain why the response time is so much higher with HTTPS, or do I need to change an SSL property in JMeter's system.properties? Thanks in advance!
An SSL handshake adds about four extra requests to establish a connection, so the first request should be something like 4x longer than over plain HTTP. See the SSL handshake diagram for more info.
However, if you see a 4x degradation on all requests, that doesn't sound right.
The following JMeter properties control SSL behaviour:
https.sessioncontext.shared - controls whether the SSL session context is created per thread (if set to false) or shared across threads (if set to true)
https.use.cached.ssl.context - controls whether a cached SSL context is reused between iterations
These properties live in the jmeter.properties file under the /bin folder of your JMeter installation. You can also override them using the -J command-line argument, as follows:
jmeter -Jhttps.sessioncontext.shared=true -Jhttps.use.cached.ssl.context=true
See Apache JMeter Properties Customization Guide for more details.
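Alternatively, you can keep the overrides in a user.properties file in the same /bin folder; JMeter applies it on top of jmeter.properties at startup, so it survives upgrades better than editing jmeter.properties directly:

# bin/user.properties
https.sessioncontext.shared=true
https.use.cached.ssl.context=true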
If the above settings don't help, you'll need to review your test plan and perhaps profile the application to see where the extra time is spent.

lighttpd: disable CGI buffering

Is there a way to stop lighttpd from buffering POSTs to a CGI executable?
It seems to me that all requests are fully buffered on disk before they are forwarded to the CGI executable, which makes it impossible for me to process the input in a stream-based way.
To clarify, I'm only talking about the request that is forwarded to the CGI executable on the standard input; I've already verified that the response is not buffered like that, and streaming output is indeed possible.
server.stream-request-body = 0 - (default) buffer the entire request body before connecting to the backend
server.stream-request-body = 1 - stream the request body to the backend; buffer to temp files
server.stream-request-body = 2 - stream the request body to the backend; minimal buffering, which might block the upload
When using HTTPS, it is recommended to additionally set ssl.read-ahead = "disable".
Details: https://redmine.lighttpd.net/projects/lighttpd/wiki/Server_stream-request-body
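Putting that together, the relevant lighttpd.conf fragment (values taken straight from the wiki page above) is:

# pass request bodies to the CGI backend as they arrive instead of spooling to disk
server.stream-request-body = 2
# recommended when the server also terminates HTTPS
ssl.read-ahead = "disable"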

Background Intelligent Transfer Service and Amazon S3

I'm using SharpBITS to download files from Amazon S3.
// Create new download job.
BitsJob job = this._bitsManager.CreateJob(jobName, JobType.Download);
// Add file to job.
job.AddFile(downloadFile.RemoteUrl, downloadFile.LocalDestination);
// Resume.
job.Resume();
It works for files which do not need authentication. However, as soon as I add the authentication query string for the Amazon S3 file request, the server responds with HTTP status 403 (unauthorized). The URL works fine in a browser.
Here is the HTTP request from the BITS service:
HEAD /mybucket/6a66aeba-0acf-11df-aff6-7d44dc82f95a-000001/5809b987-0f65-11df-9942-f2c504c2c389/v10/summary.doc?AWSAccessKeyId=AAAAZ5SQ76RPQQAAAAA&Expires=1265489615&Signature=VboaRsOCMWWO7VparK3Z0SWE%2FiQ%3D HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/7.5
Connection: Keep-Alive
Host: s3.amazonaws.com
The only difference from the request a web browser sends is the request method: Firefox makes a GET request, while BITS makes a HEAD request. Are there any issues with Amazon S3 HEAD requests and query string authentication?
Regards, Blaz
You are probably right that a proxy is the only way around this. BITS uses the HEAD request to get a content length and decide whether or not it wants to chunk the file download. It then does the GET request to actually retrieve the file - sometimes as a whole if the file is small enough, otherwise with range headers.
If you can use a proxy or some other trick to give it any kind of response to the HEAD request, it should get unstuck. Even if the HEAD request is faked with a fictitious content length, BITS will move on to a GET. You may see duplicate GET requests in a case like this, because if the first GET request returns a content length longer than the original HEAD request, BITS may decide "oh crap, I better chunk this after all."
Given that, I'm kind of surprised it's not smart enough to recover from a 403 error on the HEAD request and still move on to the GET. What is the actual behaviour of the job? Have you tried watching it with bitsadmin /monitor? If the job is sitting in a transient error state, it may do that for around 20 mins and then ultimately recover.
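For reference, watching the job from an elevated command prompt looks like this (myDownloadJob is a placeholder for whatever name was passed to CreateJob):

bitsadmin /monitor
bitsadmin /info myDownloadJob /verbose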
Before beginning a download, BITS sends an HTTP HEAD request to the server in order to figure out the remote file's size, timestamp, etc. This is especially important for BranchCache-based BITS transfers and is the reason why server-side HTTP HEAD support is listed as an HTTP requirement for BITS downloads.
That being said, BITS bypasses the HTTP HEAD request phase, issuing an HTTP GET request right away, if either of the following conditions is true:
(1) The BITS job is configured with the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag.
(2) BranchCache is disabled AND the BITS job contains a single file.
Workaround (1) is the most appropriate, since it doesn't affect other BITS transfers in the system.
For workaround (2), BranchCache can be disabled through BITS' DisableBranchCache group policy. You'll need to run "gpupdate" from an elevated command prompt after making any Group Policy changes, or it will take ~90 minutes for them to take effect.
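Workaround (1) requires the native BITS API: BITS_JOB_PROPERTY_DYNAMIC_CONTENT is set through IBackgroundCopyJob5, which is available from BITS 5.0 (Windows 8) onward and, as far as I can tell, is not surfaced by SharpBITS. A minimal native C++ sketch, with the URL and local path as placeholders and error handling omitted:

// Create a BITS download job and mark it as dynamic content so BITS
// skips the initial HEAD request and goes straight to GET.
#include <windows.h>
#include <bits.h>   // pulls in the BITS 5.0 interfaces on a current SDK

int wmain()
{
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

    IBackgroundCopyManager *mgr = NULL;
    CoCreateInstance(__uuidof(BackgroundCopyManager), NULL, CLSCTX_LOCAL_SERVER,
                     __uuidof(IBackgroundCopyManager), (void **)&mgr);

    GUID jobId;
    IBackgroundCopyJob *job = NULL;
    mgr->CreateJob(L"s3-download", BG_JOB_TYPE_DOWNLOAD, &jobId, &job);

    // The dynamic-content property lives on IBackgroundCopyJob5 (BITS 5.0+).
    IBackgroundCopyJob5 *job5 = NULL;
    if (SUCCEEDED(job->QueryInterface(__uuidof(IBackgroundCopyJob5), (void **)&job5)))
    {
        BITS_JOB_PROPERTY_VALUE value;
        value.Enable = TRUE;
        job5->SetProperty(BITS_JOB_PROPERTY_DYNAMIC_CONTENT, value);
        job5->Release();
    }

    // Placeholder URL and destination path.
    job->AddFile(L"https://s3.amazonaws.com/mybucket/file.doc?AWSAccessKeyId=...",
                 L"C:\\temp\\file.doc");
    job->Resume();

    job->Release();
    mgr->Release();
    CoUninitialize();
    return 0;
}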

HTTP Session Timeout with WebLogic

How do I find the HTTP timeout set on the WebLogic 8.1 application server?
I only have WebLogic 9 and 10 available, but on those platforms you can go to the console, click on the name of your domain, then (in the "Configuration" tab) "Web Applications". There you will find three parameters:
Post Timeout: The amount of time (in seconds) this server waits between receiving chunks of data in an HTTP POST before it times out. (This is used to prevent denial-of-service attacks that attempt to overload the server with POST data.)
Maximum Post Time: The maximum time (in seconds) this server allows for reading HTTP POST data in a servlet request. A value less than 0 means unlimited.
Maximum Post Size: The maximum size this server allows for HTTP POST data in a servlet request. A value less than 0 indicates an unlimited size.
However, there might be other parameters involved depending on what your problem exactly is.
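On 9/10 you can also read these values with WLST instead of the console. A hedged sketch; the attribute names (PostTimeoutSecs, MaxPostTimeSecs, MaxPostSize) come from the WebServerMBean, the URL and credentials are placeholders, and 8.1 has a different scripting story, so double-check there:

# WLST: inspect the HTTP POST limits on a server
connect('weblogic', 'password', 't3://adminhost:7001')
cd('/Servers/myserver/WebServer/myserver')
print cmo.getPostTimeoutSecs(), cmo.getMaxPostTimeSecs(), cmo.getMaxPostSize()
disconnect()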