HTTP Session Timeout with WebLogic

How do I find the HTTP timeout set on the WebLogic 8.1 application server?

I only have WebLogic 9 and 10 available, but on those platforms you can go to the console, click the name of your domain, and then open "Web Applications" under the "Configuration" tab. There you will find three parameters:
Post Timeout: The amount of time this server waits between receiving chunks of data in an HTTP POST before it times out. (This is used to prevent denial-of-service attacks that attempt to overload the server with POST data.)
Maximum Post Time: The maximum time (in seconds) this server allows for reading HTTP POST data in a servlet request. A value less than 0 means unlimited.
Maximum Post Size: The maximum post size this server allows for reading HTTP POST data in a servlet request. A value less than 0 indicates an unlimited size.
However, there might be other parameters involved depending on what exactly your problem is.
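If you would rather read these values programmatically than through the console, the same settings live on the server's WebServerMBean and can be queried over JMX. Below is a minimal Java sketch, assuming the documented WebLogic t3/JMX connection pattern; the host, port, credentials and the exact ObjectName (which depends on your domain and server names) are placeholders you would adjust, and weblogic.jar must be on the classpath for the t3 provider:

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class PostTimeoutQuery {
    public static void main(String[] args) throws Exception {
        // Connect to the server's runtime MBean server over t3.
        JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                "/jndi/weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");   // placeholder user
        env.put(Context.SECURITY_CREDENTIALS, "password"); // placeholder password
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // ObjectName of the server's WebServerMBean - an assumption,
            // adjust Name/Server to match your domain.
            ObjectName webServer = new ObjectName(
                    "com.bea:Name=myserver,Type=WebServer,Server=myserver");
            for (String attr : new String[] {
                    "PostTimeoutSecs", "MaxPostTimeSecs", "MaxPostSize"}) {
                System.out.println(attr + " = " + mbs.getAttribute(webServer, attr));
            }
        }
    }
}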

Related

Advanced-rest-client: How can I increase the request timeout while using the advanced-rest-client?

I am calling a REST controller on my localhost, passing in a bunch of request headers. The call is made fine, but because of a processing delay the advanced-rest-client times out and doesn't show the result from the server. How can I increase the timeout so it waits for the server to complete the request and displays the result? I would appreciate your hints on this.
Thanks,
Sam
You can adjust the request timeout in the application settings. In version 14 it is also possible to adjust the request timeout directly in the request editor.
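ARC's timeout is purely a UI setting, but if you end up calling the same endpoint from your own test code, the equivalent knobs are the client-side connect and request timeouts. A minimal Java sketch with java.net.http.HttpClient; the endpoint URL, header and five-minute value are illustrative assumptions, not anything ARC-specific:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class SlowEndpointCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10)) // max wait to open the connection
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/slow-endpoint")) // hypothetical endpoint
                .timeout(Duration.ofMinutes(5)) // max wait for the response to arrive
                .header("X-Custom-Header", "value") // stand-in for the question's headers
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}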

Apache Request/Response Too Slow

I have an Apache server with 16GB of RAM. The script cap.php returns a very small chunk of data (500B). It starts a MySQL connection and makes a simple query.
However, the response time from the server is, in my opinion, too long.
I attach a screenshot of the Developer Tool Panel in Chrome.
Besides SSL and the TTFB, there is a strange delay of 300ms (Stalled).
If I try a curl from the WebServer:
curl -w '\nLookup time:\t%{time_namelookup}\nConnect time:\t%{time_connect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' -k -H 'miyazaki' https://127.0.0.1/ui/cap.php
Lookup time: 0.000
Connect time: 0.000
PreXfer time: 0.182
StartXfer time: 0.266
Total time: 0.266
Does anyone know what that is?
Eventually, I found that if you use SSL, it really does matter to switch on the KeepAlive directive in Apache. See the picture below.
According to the Chrome documentation:
Stalled/Blocking
Time the request spent waiting before it could be sent. This time is inclusive of any time spent in proxy negotiation. Additionally, this time will include when the browser is waiting for an already established connection to become available for re-use, obeying Chrome's maximum six TCP connection per origin rule.
So this appears to be a client issue with Chrome talking to the network rather than a server config issue. As you are only making one request, I think we can rule out the TCP limit per origin (unless you have lots of other tabs using up these connections), so I would guess either limitations on your PC (network card, RAM, CPU) or infrastructure issues (e.g. you connect via a proxy and it takes time to set up that connection).
Your curl request doesn't seem to show this delay, as it has just a 0.182 wait time to send the request (which is easily explained by the HTTPS negotiation) and then a 0.266 total time to download (including the 0.182). This compares with 0.700 seconds when using Chrome, so I don't understand why you say "total time is similar" when to me it clearly is not.
Finally, I do not understand your follow-up answer. It looks to me like you have made a request, presumably shortly after another request, as this one has skipped the whole network connection stage (including any grey stalling, blue DNS lookup, orange initial connection and purple HTTPS connection). So of course it is quicker. But it's not comparing like for like with the first screenshot in your question, and it does not address your question.
But yes, you absolutely should be using keep-alives (they are on by default in most web servers, so it usually takes extra effort to turn them off) and HTTPS session resumption techniques (not on by default unless you explicitly add this to your HTTPS config) to benefit any additional requests sent shortly after the first. But these will not benefit the first connection of the session.
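To see the keep-alive effect described above, you can time two consecutive requests issued through the same client: the first pays for DNS, TCP and TLS setup, while the second can reuse the pooled connection. A small Java sketch; java.net.http.HttpClient keeps connections alive by default, and the URL is a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeepAliveTiming {
    public static void main(String[] args) throws Exception {
        // HttpClient pools connections, so the second request can reuse the
        // TCP connection and TLS session established by the first.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/")) // placeholder URL
                .GET()
                .build();
        for (int i = 1; i <= 2; i++) {
            long start = System.nanoTime();
            HttpResponse<Void> response =
                    client.send(request, HttpResponse.BodyHandlers.discarding());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("Request %d: status %d in %d ms%n",
                    i, response.statusCode(), elapsedMs);
        }
    }
}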

Difference in response time between HTTP and HTTPS

I tested my web site with 100 users over HTTP and HTTPS. The response time obtained with HTTPS is much higher than with HTTP, nearly four times greater. Can anyone explain why the response time is higher with HTTPS, or do I need to change any SSL property in JMeter's system.properties? Thanks in advance!
The SSL handshake involves about four extra exchanges to establish a connection, so the first request should be something like 4x longer than in the case of HTTP. See The SSL handshake diagram for more info.
However, if you see a 4x performance degradation for all requests, that doesn't sound right.
There are following JMeter properties which control SSL flows:
https.sessioncontext.shared - controls whether SSL session contexts are created per thread (if set to false) or shared across all threads (if set to true)
https.use.cached.ssl.context - controls whether the cached SSL context is reused between iterations
These properties live in the jmeter.properties file under the /bin folder of your JMeter installation. It's also possible to override them with the -J command-line argument as follows:
jmeter -Jhttps.sessioncontext.shared=true -Jhttps.use.cached.ssl.context=true
See Apache JMeter Properties Customization Guide for more details.
If the above settings don't help, you'll need to review your test plan and perhaps profile the application to see where the extra time is spent.
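To sanity-check how much of the degradation is really handshake cost, you can time a bare TCP connect against a TCP connect followed by a TLS handshake, outside JMeter entirely. A minimal Java sketch; example.com is a placeholder host, and both timings include DNS resolution:

import java.net.InetSocketAddress;
import java.net.Socket;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HandshakeCost {
    public static void main(String[] args) throws Exception {
        String host = "example.com"; // placeholder host

        // Plain TCP connect: a single round trip.
        long start = System.nanoTime();
        try (Socket plain = new Socket()) {
            plain.connect(new InetSocketAddress(host, 80), 10_000);
        }
        System.out.printf("TCP connect:         %d ms%n",
                (System.nanoTime() - start) / 1_000_000);

        // TCP connect plus TLS handshake: several additional round trips.
        start = System.nanoTime();
        try (SSLSocket tls = (SSLSocket)
                SSLSocketFactory.getDefault().createSocket(host, 443)) {
            tls.startHandshake();
        }
        System.out.printf("TCP + TLS handshake: %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }
}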

LR: VUgen web_set_timeout function unrealistic?

I understand that VUGen's web_set_timeout function allows me to set a timeout value higher than the usual value (which seems to be 120 seconds).
What I do not understand: Doesn't this imply that all users would have to set their browser's HTTP POST timeout to a new, higher value? Don't I then test with a (simulated/virtual) user configuration that no real-world user would or could use?
Wouldn't I also require all proxies between the user and the webserver to be configured with an at-least-as-high timeout value to use a custom timeout value in the browser? Otherwise my user's transactions will fail while my load test would pass?
Context: Load test of a browser-based (Ajax) frontend with VuGen 9.51. The browser times out on a web server request with "Error -27728 Step download timeout (120 seconds) has expired when downloading non-resource(s)", and I hesitate to use web_set_timeout for obvious reasons.
Each browser has a different time-out value defined. This value can also be changed rather easily by users.
Have a look at http://support.microsoft.com/kb/181050 for info on IE timeouts.
In short it says:
Internet Explorer imposes a time-out limit for the server to return data.
By default, the time-out limit is as follows:
Internet Explorer 4.0 and 4.01: 5 minutes
Internet Explorer 5.x and 6.x: 60 minutes
Internet Explorer 7 and 8: 60 minutes
Internet Explorer does not wait endlessly for the server to come back with data when the server has a problem.
Also, many services in use today are machine-to-machine services (often SOAP requests are used for this), and they may have time-outs that are interface specific.
The place in VuGen where this is set in the UI is "Run-Time Settings | Preferences | Options" - in this list the following timeouts can be set:
HTTP-Request connect timeout: default 120 seconds
HTTP-Request response timeout: default 120 seconds
In practice, however, if a normal web UI takes more than 5-10 seconds to respond to user clicks, users will consider the service slow.
The exception here is SAP EP, where 30+ minutes of waiting for simple things is OK ... :)
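To make the connect-timeout versus response-timeout distinction concrete outside VuGen, here is how the same two knobs look on a plain Java HttpURLConnection; the URL is a hypothetical endpoint and the 120-second values simply mirror the defaults quoted above (an illustration, not LoadRunner code):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class BrowserLikeTimeouts {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/ajax-endpoint"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Equivalent of "HTTP-Request connect timeout": max wait to open the connection.
        conn.setConnectTimeout(120_000);
        // Equivalent of "HTTP-Request response timeout": max wait for response data.
        conn.setReadTimeout(120_000);
        try (InputStream in = conn.getInputStream()) {
            System.out.println("HTTP " + conn.getResponseCode());
            System.out.println(in.readAllBytes().length + " bytes read");
        }
    }
}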

Background Intelligent Transfer Service and Amazon S3

I'm using SharpBITS to download a file from Amazon S3.
// Create new download job.
BitsJob job = this._bitsManager.CreateJob(jobName, JobType.Download);
// Add file to job.
job.AddFile(downloadFile.RemoteUrl, downloadFile.LocalDestination);
// Resume the job.
job.Resume();
It works for files which do not need authentication. However, as soon as I add the authentication query string to the Amazon S3 file request, the response from the server is HTTP status 403 (Forbidden). The URL works fine in a browser.
Here is the HTTP request from the BITS service:
HEAD /mybucket/6a66aeba-0acf-11df-aff6-7d44dc82f95a-000001/5809b987-0f65-11df-9942-f2c504c2c389/v10/summary.doc?AWSAccessKeyId=AAAAZ5SQ76RPQQAAAAA&Expires=1265489615&Signature=VboaRsOCMWWO7VparK3Z0SWE%2FiQ%3D HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/7.5
Connection: Keep-Alive
Host: s3.amazonaws.com
The only difference from the request a web browser makes is the request type: Firefox makes a GET request and BITS makes a HEAD request. Are there any issues with Amazon S3 HEAD requests and query-string authentication?
Regards, Blaz
You are probably right that a proxy is the only way around this. BITS uses the HEAD request to get a content length and decide whether or not it wants to chunk the file download. It then does the GET request to actually retrieve the file - sometimes as a whole if the file is small enough, otherwise with range headers.
If you can use a proxy or some other trick to give it any kind of response to the HEAD request, it should get unstuck. Even if the HEAD request is faked with a fictitious content length, BITS will move on to a GET. You may see duplicate GET requests in a case like this, because if the first GET request returns a content length longer than the original HEAD request, BITS may decide "oh crap, I better chunk this after all."
Given that, I'm kind of surprised it's not smart enough to recover from a 403 error on the HEAD request and still move on to the GET. What is the actual behaviour of the job? Have you tried watching it with bitsadmin /monitor? If the job is sitting in a transient error state, it may do that for around 20 mins and then ultimately recover.
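Incidentally, this also suggests why the HEAD request draws a 403 at all: with S3's query-string (signature version 2) authentication, the HTTP verb is the first line of the string that gets signed, so a signature computed for GET is cryptographically invalid for a HEAD of the same URL. A hedged Java sketch of the v2 scheme makes the verb dependence visible; the secret key, expiry and resource below are placeholders:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class S3V2SignatureSketch {
    public static void main(String[] args) throws Exception {
        String secretKey = "SECRET";                    // placeholder credential
        String expires = "1265489615";                  // Unix expiry from the URL
        String resource = "/mybucket/path/summary.doc"; // placeholder resource
        String contentMd5 = "";                         // empty for a presigned GET
        String contentType = "";

        // v2 string-to-sign: VERB \n Content-MD5 \n Content-Type \n Expires \n resource.
        // GET and HEAD therefore yield different signatures for the same URL.
        for (String verb : new String[] {"GET", "HEAD"}) {
            String stringToSign = String.join("\n",
                    verb, contentMd5, contentType, expires, resource);
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(
                    secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            String signature = Base64.getEncoder().encodeToString(
                    mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
            System.out.println(verb + " -> Signature="
                    + URLEncoder.encode(signature, StandardCharsets.UTF_8));
        }
    }
}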
Before beginning a download, BITS sends an HTTP HEAD request to the server in order to figure out the remote file's size, timestamp, etc. This is especially important for BranchCache-based BITS transfers and is the reason why server-side HTTP HEAD support is listed as an HTTP requirement for BITS downloads.
That being said, BITS bypasses the HTTP HEAD request phase, issuing an HTTP GET request right away, if either of the following conditions is true:
The BITS job is configured with the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag.
BranchCache is disabled AND the BITS job contains a single file.
Workaround (1), setting the BITS_JOB_PROPERTY_DYNAMIC_CONTENT flag, is the most appropriate, since it doesn't affect other BITS transfers on the system.
For workaround (2), BranchCache can be disabled through BITS' DisableBranchCache group policy. You'll need to do "gpupdate" from an elevated command prompt after making any Group Policy changes, or it will take ~90 minutes for the changes to take effect.