I need to limit the number of users that can be logged in to my web app running on WebLogic.
The solution I found in the Oracle documentation says: "The weblogic.http.session.maxConcurrentRequest property limits the number of concurrent requests for a session. If the number of concurrent requests for a given session exceeds the specified value, the servlet container will start rejecting requests. By default, this property is set to -1, which indicates the servlet container does not impose any restrictions."
But I have no idea where or how to set this property.
I need to either show an error message telling the user they can only be logged in to one session at a time, or kill the old session.
Add this as a JAVA_OPTION in your domain's startup script.
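For example, on Linux/UNIX you can append it to JAVA_OPTIONS in the domain's setDomainEnv.sh (script and variable names can differ slightly between WebLogic versions, and the value 1 below is only an illustration):

    # In $DOMAIN_HOME/bin/setDomainEnv.sh, before the server is started:
    JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.http.session.maxConcurrentRequest=1"
    export JAVA_OPTIONS

Note that, per the documentation you quoted, this property limits concurrent requests per session, not the number of sessions per user, so on its own it will not enforce a single login per user.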
I'm new to haproxy and load balancing. I want to see what happens when a backend host is turned off while the proxy is running.
The problem is, if I turn off one of the backends and refresh the browser, the page immediately exposes a 503 error to the user. After the next page load it no longer gets the error, since presumably that backend has been removed from the pool.
As a test I have set up two backend Flask apps and configured HAProxy to balance them like so:
backend app
mode http
balance roundrobin
server app1 127.0.0.1:5001 check
server app2 127.0.0.1:5002 check
My understanding according to this:
https://www.haproxy.com/doc/aloha/7.0/haproxy/healthchecks.html#check-parameters
is that every 2 seconds the backend hosts are pinged to see if they are up, and they are removed from the pool if they are down. The 5xx error happens in the window between the time I kill the backend and the next health check.
I would think there is a way to get around this 5xx error by having HAProxy perform a little logic: if a request to a backend fails, remove that failed backend from the pool, switch to another one, and retry the request. This way the user would never see the failure.
Is there a way to do this, or should I try something else so that my user does not get an error?
By default HAProxy will retry 3 times (retries) with 1s intervals against the same backend server. In order to allow it to pick another server, you should set option redispatch; a configuration sketch follows the notes below.
Also consider the following (carefully, as it can be harmful):
decrease fall (default is 3),
decrease error-limit (default is 10) and set on-error to mark-down or sudden-death,
tune health-check intervals with inter/fastinter/downinter.
Note: HAProxy retries only on connection errors (e.g. ECONNREFUSED, as in your case); it will not resend/resubmit the request or its data.
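Putting that together, a minimal sketch of the backend section (the specific values are illustrative assumptions, not tuned recommendations):

    backend app
        mode http
        balance roundrobin
        # allow a failed connection attempt to be redispatched to
        # another server instead of retrying the same one
        retries 2
        option redispatch
        # check every 2s; mark a server down after 2 failed checks,
        # and probe transitioning/down servers more frequently
        server app1 127.0.0.1:5001 check inter 2s fastinter 1s downinter 1s fall 2
        server app2 127.0.0.1:5002 check inter 2s fastinter 1s downinter 1s fall 2

With option redispatch, a connection that fails against one server is retried against the other, which is what prevents the user-visible 503 during the health-check window.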
I have developed some JAX-RS web services and deployed the WAR file to a managed server on WebLogic 12.2.1. When I call a web service, either through a client program, or via web browser, I noticed that nothing is getting updated in E:\MLM\MyDomain\servers\MyAppSrv01\logs\access.log. This file stays empty all the time. When the next day comes (at 12.00am), the file will roll over to access.logNNNNN (e.g. access.log00004) and then I can see some of the GET and POST calls of the previous day appearing in access.logNNNNN. The strange thing is that only some of the web service calls appear in access.logNNNNN, even though I make many calls throughout the testing. What could be the problem?
Thanks in advance.
You are not seeing access logs at run time because of the log buffer size. To reduce I/O, WebLogic writes log entries to a buffer first, and only writes them out to the access.log file when the buffer limit is reached.
Log Buffer Size
The maximum size (in kilobytes) of the buffer that stores HTTP requests. When the buffer reaches this size, the server writes the data to the HTTP log file. Use the LogFileFlushSecs property to determine the frequency with which the server checks the size of the buffer.
You can set this value to 0 to get run-time (immediate) logging.
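If you want to change this without using the console, the buffer size can be set in the domain's config.xml; this is a sketch assuming the buffer-size-kb element name, which may differ between WebLogic versions:

    <!-- $DOMAIN_HOME/config/config.xml, inside the <server> element for MyAppSrv01 -->
    <web-server>
      <web-server-log>
        <!-- 0 = flush each entry to access.log immediately, at the cost of extra I/O -->
        <buffer-size-kb>0</buffer-size-kb>
      </web-server-log>
    </web-server>

Restart the managed server after the change.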
My problem is that when I retrieve too much data from the database via a select in the ODBC node, running the workflow fails with this exception: "The timeout (30 secs) was exceeded while waiting for a response from DropPoint transactionRequest".
Please help: how do I change the default time limit of the workflow?
The timeout you mention is the timeout on the connection that is using a DropPoint, not a timeout on the workflow as a whole.
From within the connections pane, open the connection you are using on the workflow and modify the timeout setting there.
Separately, if you're calling the workflow via the REST API, you can set a timeout. To override the default there, add the query string _timeout=300 to the URL in your consuming app (i.e. not in the endpoint URL setting in Flowgear).
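For example, from the consuming app (the host and workflow path here are hypothetical placeholders; only the _timeout query parameter matters):

    # Overrides the default with a 300-second timeout for this invocation
    curl "https://yourdomain.flowgear.net/yourworkflow?_timeout=300"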
We have developed an application where, after a successful request, the session has to be destroyed. This works fine when we have a single Tomcat.
But it is not happening when we use multiple Tomcats under an Apache load balancer (we are using the load balancer to balance requests between two Tomcats, which host the same application).
The SessionID that is created, and has been processed successfully, can be used for one more transaction, after which it gets killed.
Moreover, the SessionID value is appended with either 'n1' or 'n2' (SessionID-n1). I am not sure why this is happening.
Please help me to resolve this issue.
We have a configuration setup as below:
Load Balancer
/ \
Cluster1 Cluster2
| |
Tomcat1 Tomcat2
Thanks,
Sandeep
If you have configured each Tomcat node to have a "jvmRoute", then the string you specify there will be appended to the session identifier. This can help your load-balancer determine which back-end server should be used to serve a particular request. It sounds like this is exactly what you have done. Check your CATALINA_BASE/conf/server.xml file for the word "jvmRoute" to confirm.
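For reference, jvmRoute is set on the Engine element of each node's server.xml, for example (assuming your nodes are named n1 and n2, as your session IDs suggest):

    <!-- CATALINA_BASE/conf/server.xml on Tomcat1; use jvmRoute="n2" on Tomcat2 -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="n1">
        ...
    </Engine>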
If you only use a session for a single transaction, why are you bothering to create the session in the first place? Is a request == transaction?
If you are sure you are terminating the session when the transaction is complete, you should be okay even if the client wants to try to make a new request with the same session id. It will no longer be valid and therefore useless to the client.
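For reference, explicit termination at the end of the transaction typically looks like this in a servlet (a minimal sketch):

    // Inside doPost(HttpServletRequest request, HttpServletResponse response),
    // after the final step of the transaction has completed:
    HttpSession session = request.getSession(false); // don't create one if absent
    if (session != null) {
        session.invalidate(); // the old session id is now useless to the client
    }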
It's not clear from your question whether there is an actual problem with the session because you claim it is "getting killed" which sounds like what you want it to do. If you provide more detail on the session expiration thing, I'll modify my answer accordingly.
We are maintaining an application which uses JBoss as its application server. Recently a customer complained about an OOM error and was unable to log in. We restarted JBoss and added configuration to collect a heap dump.
Customer's application usage pattern: the customer opens a page containing 10 charts which are rendered in an iframe. Flex is used for the UI. This page refreshes itself every 5 minutes.
Request flow: the application is built in such a way that all requests go through an Apache (HTTP) proxy server. Every 5 minutes the dashboard request has to pass through the Apache proxy server.
After 5 days the OOM error occurred, and on analyzing the heap dump we noticed that it was due to instances of "org.apache.catalina.session.StandardSession", loaded by "org.jboss.mx.loading.UnifiedClassLoader3 # 0x7b440dd70", occupying 1,776,569,336 bytes (87.71%).
Full description is:
847,599 instances of "org.apache.catalina.session.StandardSession", loaded by "org.jboss.mx.loading.UnifiedClassLoader3 # 0x7b440dd70" occupy 1,776,569,336 (87.71%) bytes. These instances are referenced from one instance of "org.apache.catalina.Session[]", loaded by "org.jboss.mx.loading.UnifiedClassLoader3 # 0x7b440dd70"
Keywords
org.jboss.mx.loading.UnifiedClassLoader3 # 0x7b440dd70
org.apache.catalina.session.StandardSession
org.apache.catalina.Session[]
Common Path To the Accumulation Point says:
java.lang.Thread # 0x7b4a5f358 ContainerBackgroundProcessor[StandardEngine[jboss.web]]
org.apache.catalina.session.StandardSession # 0x797b49b70
I am new to JBoss and proxy servers. How can I debug this issue? Could you please help me?
Thanks
M
org.apache.catalina.session.StandardSession is the class representing a user HTTP session. I would say that every one of your requests creates a new HTTP session on JBoss which is never expired. Either pass the session cookie back to JBoss with every refresh, to reuse sessions instead of creating new ones, or configure a sensible HTTP session expiration policy in your JBoss.
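For the second option, a sensible idle timeout can be set in the application's web.xml (30 minutes below is an illustrative value):

    <!-- WEB-INF/web.xml: expire idle sessions after 30 minutes -->
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>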
Good luck,
Plumbr team