We are using WebSphere Application Server 8.5.0.0. We have a requirement to query an LDAP server to get customer details. I tried to configure the connection pool as described here and here.
I passed the following JVM arguments:
-Dcom.sun.jndi.ldap.connect.pool.maxsize=5
-Dcom.sun.jndi.ldap.connect.pool.timeout=60000
-Dcom.sun.jndi.ldap.connect.pool.debug=all
Below is a sample code snippet
Hashtable<String,String> env = new Hashtable<String,String>();
...
...
env.put("com.sun.jndi.ldap.connect.pool", "true");
env.put("com.sun.jndi.ldap.connect.timeout", "5000");
InitialDirContext c = new InitialDirContext(env);
...
...
c.close();
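For completeness, the full lookup looks roughly like this (the factory, provider URL, credentials and search base below are placeholders, not my real values; this sits inside a method declared to throw NamingException):
Hashtable<String, String> env = new Hashtable<String, String>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");      // placeholder URL
env.put(Context.SECURITY_AUTHENTICATION, "simple");
env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=example,dc=com"); // placeholder bind DN
env.put(Context.SECURITY_CREDENTIALS, "secret");                   // placeholder password
env.put("com.sun.jndi.ldap.connect.pool", "true");                 // use the shared connection pool
env.put("com.sun.jndi.ldap.connect.timeout", "5000");              // wait at most 5s for a connection

InitialDirContext ctx = null;
NamingEnumeration<SearchResult> results = null;
try {
    ctx = new InitialDirContext(env);
    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    results = ctx.search("ou=people,dc=example,dc=com", "(uid={0})",
            new Object[] { "someCustomer" }, controls);            // placeholder search
    while (results.hasMore()) {
        SearchResult entry = results.next();
        // read the customer attributes here
    }
} finally {
    // close the enumeration and the context so the connection is returned to the pool
    if (results != null) { try { results.close(); } catch (NamingException ignore) { } }
    if (ctx != null) { try { ctx.close(); } catch (NamingException ignore) { } }
}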
I have two issues here
When I call the service for the 6th time, I get javax.naming.ConnectionException: Timeout exceeded while waiting for a connection: 5000ms. I checked the connection pool debug logs and noticed that the connections are not returned to the pool immediately, even though I close the context in a finally block. The connections are released after some time and expire some time after being released. If I call the service again after that, it connects to the LDAP server, but new connections are created.
I tried to execute the code and I can see the connection pool debug logs, but they are written to the System.err log. Is this an issue? Can I ignore it?
However, when I run the code as a standalone application (multithreaded, with a loop of 50 iterations), the connections are returned/released immediately.
Can anyone please let me know what I am doing wrong?
I am using Apache Curator v4.3.0 (ZooKeeper v3.5.8), and I noticed that in some disconnect/reconnect scenarios I stop getting a RECONNECTED event in the registered listener(s).
CuratorFramework client = ...;
// retry policy is RetryUntilElapsed with Integer.MAX_VALUE
// sessionTimeout is 15 sec
// connectionTimeout is 5 sec
client.getConnectionStateListenable().addListener(new ConnectionStateListener()...
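For completeness, the client setup and listener registration look roughly like this (the connect string and the retry sleep interval are placeholders):
CuratorFramework client = CuratorFrameworkFactory.newClient(
        "zk-host:2181",                                   // placeholder connect string
        15000,                                            // session timeout: 15 sec
        5000,                                             // connection timeout: 5 sec
        new RetryUntilElapsed(Integer.MAX_VALUE, 1000));  // retry "forever", 1 sec between retries
client.start();

client.getConnectionStateListenable().addListener((cf, newState) -> {
    if (newState == ConnectionState.RECONNECTED) {
        // re-register watches / recreate ephemeral nodes here
    }
});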
Although I do see that the ConnectionStateManager prints the state change:
[org.apache.zookeeper.ClientCnxn] - Client session timed out, have not heard from server in 15013ms for sessionid 0x10000037e340012, closing socket connection and attempting reconnect
[org.apache.zookeeper.ClientCnxn] - Opening socket connection to server
...
[org.apache.curator.ConnectionState] - Session expired event received
[org.apache.zookeeper.ClientCnxn] - Session establishment complete on server
[org.apache.curator.framework.state.ConnectionStateManager] - State change: RECONNECTED
Usually I see my listener called via stateChanged right after this, but not always.
The CuratorFramework client is shared between multiple components that register different listeners. I didn't see any restriction requiring one client per listener, but when I don't share the client, the problem no longer occurs.
Any suggestions on how to proceed debugging this problem?
Thank you,
Meron
This appears to be the bug that was fixed in Curator 5.0.0 (https://issues.apache.org/jira/browse/CURATOR-525). If you can, please test with 5.0.0 and see whether it fixes the issue.
I have deployed OpenDJ on one of our instances and written a Java application to access user details from OpenDJ using the UnboundID LDAP SDK. Everything is up and running and works.
The issue occurs when concurrent user-search requests hit OpenDJ, and I get the following exception:
Error while checking the user abdulwaheed in LDAP: Error code 81,
message LDAPSearchException(resultCode=81 (server down), numEntries=0, numReferences=0,
errorMessage='The connection to server rfhat-iam-opendj.net:1389 was closed while waiting
for a response to search request SearchRequest(baseDN='uid=abdulwaheed,ou=people,dc=domain,dc=com',
scope=BASE, deref=NEVER, sizeLimit=1, timeLimit=0, filter='(objectClass=*)', attrs={}).')
Previously, I thought the issue could be in my Java application, not handling multiple concurrent requests and not getting a free connection, but after looking at the error code, the error is coming from OpenDJ (LDAP error 81, server down).
I looked into the OpenDJ connection settings as well, and it seems all the limits are set to their default values (unlimited).
So I am not sure what the issue is and where I should look further.
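For reference, a simplified sketch of how the search can be issued through a shared UnboundID LDAPConnectionPool (the host, bind credentials and pool size below are placeholders, not my exact code):
// sketch only; the real code handles LDAPException and reads the entry attributes
LDAPConnection connection = new LDAPConnection("rfhat-iam-opendj.net", 1389,
        "cn=Directory Manager", "password");                         // placeholder credentials
LDAPConnectionPool pool = new LDAPConnectionPool(connection, 1, 10); // shared across request threads

// each concurrent request borrows a connection from the pool for its search
SearchResult result = pool.search("uid=abdulwaheed,ou=people,dc=domain,dc=com",
        SearchScope.BASE, "(objectClass=*)");
System.out.println("Entries returned: " + result.getEntryCount());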
I have the following code:
package ejbs;

import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Singleton
public class timerbackup {

    @Resource
    private TimerService timerservice;

    // called by the container when a programmatic timer expires
    @Timeout
    public void methodTimeout(Timer timer) {
        System.out.println("timeout");
    }

    // arms a single-action timer that fires once after "in" milliseconds
    public void settimer(long in) {
        Timer timer = timerservice.createSingleActionTimer(in, new TimerConfig());
    }
}
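For reference, the timer is armed from another bean, roughly like this (the caller below is only an illustrative sketch, not the real class):
import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class BackupScheduler {

    @EJB
    private timerbackup backupTimer;

    public void scheduleBackup() {
        // fires the @Timeout method of timerbackup once, 60 seconds from now
        backupTimer.settimer(60000L);
    }
}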
After deploying the application, the error message "EJB Timer Service is not available" appeared.
To solve the problem I followed these steps:
Access the GlassFish admin console (http://localhost:4848)
Go to Configurations->server-config->EJB Container
Select the tab EJB Timer Service
Then fill out Timer Datasource with your JDBC resource (I used "jdbc/projecto_final")
Restart the server
As suggested in Set/configure the EJB Timer Service’s DataSource.
This worked, but after some time the Timer Service stopped working. After deploying the application, the following error messages appear:
Severe: Exception while loading the app
Severe: Undeployment failed for context /ProjetoEE1
Info: /file:/E:/formacaoJAVA/2moduloJEE/pratica/projecto_final /projfinal2/ProjetoEE1/build/web/WEB-INF/classes/_DEFAULT_PU logout successful
Warning: EJB Timer Service is not available. Timers for application with id 96332697224871936 will not be deleted
Set/configure the EJB Timer Service's DataSource also mentions this problem and points to a solution in Glassfish DeploymentException: Error in linking security policy for.
The solutions presented in Glassfish DeploymentException: Error in linking security policy for basically consist of deleting some files. The most voted answer suggests the following:
Stopped the GlassFish server
Deleted all the content from glassfishhome/glassfish/domains/yourdomainname/generated
Started GlassFish
I have installed GlassFish Server 4.1.1, and this doesn't work.
The second most voted answer basically suggests the following:
All that's needed to fix this problem is to delete the entire OSGi cache under $GLASSFISH_HOME/glassfish/domains//osgi-cache
This also doesn't work.
What can I do? Any help will be very appreciated.
Best regards,
Rafael Costa
I have solved the "same" problem by:
deleting the glassfish/domains/domain-name/generated folder completely
building the application again
restarting GlassFish
In my case, I had installed a new version of my application after a pull/push operation with Git, and my application stopped working. So I know that the application worked well before this new build and that nothing had been changed on GlassFish.
I found some explanation on the following site:
https://dzone.com/articles/solving-ejb-timer-service-not-available-error-in-g-1
The GlassFish application server uses its embedded JavaDB to persist the state of its EJB timers. Not setting the data source for the timer service correctly prevents the EJB timers from being restored and, eventually, from functioning properly. In this case, the "EJB Timer Service is not available" error message is normally returned. This problem prevents any application that uses the EJB timer service from being started or deployed.
There are two procedures available to overcome such blocking situations:
The first solution is to go to JDBC connection pools and double-check the health of the Timerpool connection pool by pinging it. If the ping fails, the connection pool needs to be checked or redefined.
If pinging the connection pool is successful, then the problem could be the presence of the EJB timer marker file. A marker file is created whenever a problem occurs during the EJB timer service start-up or restore.
Deleting the marker will solve the problem. The marker file "ejb-timer-service-app" is located under as-install-parent/glassfish/domains/domain-name/generated/ejb/. Don't forget to restart GlassFish!
Replace
import javax.ejb.Singleton;
With
import javax.inject.Singleton;
It worked for me. I'm using the Derby database, is that the case for you?
I solved the problem. If I remember correctly, I created a new JDBC resource and a new JDBC connection pool.
The following link explains how to create a JDBC resource and a JDBC Connection Pool.
General Steps for Creating a JDBC Resource
The JDBC resource and the JDBC Connection Pool can be created using the admin console or the asadmin utility.
The following link explains how to use the asadmin utility: Using the asadmin Utility.
(I used this utility because an error appeared when I tried to create a JDBC resource and a JDBC connection pool from the admin console.)
In the admin console, the "Pool Name" field of the created JDBC resource should equal the name of the created JDBC connection pool.
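For reference, the asadmin commands looked roughly like this (the pool name, JNDI name and the Derby data source class/properties are placeholders for my setup):
asadmin create-jdbc-connection-pool --datasourceclassname org.apache.derby.jdbc.ClientDataSource --restype javax.sql.DataSource --property user=APP:password=APP:databaseName=projecto_final:serverName=localhost:portNumber=1527 projecto_final_pool
asadmin create-jdbc-resource --connectionpoolid projecto_final_pool jdbc/projecto_final
asadmin ping-connection-pool projecto_final_pool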
After that I followed these steps:
Configurations->server-config->EJB Container
Select the tab EJB Timer Service
Fill the field Timer Datasource with the name of the JDBC resource.
Restart the server
Any questions, please feel free to ask.
Best Regards Rafael Santos Costa
Hello, I met the same problem. If you have GlassFish 4.1.1, there is probably an instability in the server with respect to timers.
Solution: upgrade GlassFish 4.1 to GlassFish 5 and deploy the web application on the new server.
We recently migrated to Spring Boot 1.3.1 from a traditional Spring project.
Our existing clients use Tyrus 1.12 as a WebSocket client.
After the upgrade, we found that the clients no longer connect and throw AuthenticationException. Strangely, they are able to connect the first time after a server restart, and soon after that they throw AuthenticationException.
Digging a bit more, I found that Tyrus receives a 401 initially and passes on the credentials subsequently. The server logs indicate the same behaviour, first assigning ROLE_ANONYMOUS and then the correct role, ROLE_GUEST, thereafter.
It seems that after the negotiation the server closes the connection and disconnects.
I observed the same behaviour when using the Spring STOMP WebSocket client with Tyrus.
ClientManager container = ClientManager.createClient();
container.getProperties().put("org.glassfish.tyrus.client.sharedContainer", true);
container.getProperties().put(ClientProperties.CREDENTIALS, new Credentials("guest", "guest"));
StandardWebSocketClient webSocketClient = new StandardWebSocketClient(container);
final CountDownLatch messageLatch = new CountDownLatch(10);
WebSocketStompClient stompClient = new WebSocketStompClient(webSocketClient);
This same server setup works fine when the credentials are sent in the header.
stompClient.connect(url, getHandshakeHeaders("guest", "guest"), handler);
And this will NOT work since the credentials are not in the header
ListenableFuture<StompSession> session = stompClient.connect(url, handler, "localhost", "8080");
I don't understand why it works one way and not the other.
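For reference, getHandshakeHeaders is just a small helper that puts Basic credentials into the handshake headers, roughly like this (a sketch, not the exact code):
private WebSocketHttpHeaders getHandshakeHeaders(String user, String password) {
    String token = Base64.getEncoder()
            .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
    WebSocketHttpHeaders headers = new WebSocketHttpHeaders();
    headers.add(HttpHeaders.AUTHORIZATION, "Basic " + token);
    return headers;
}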
After upgrading to Spring Boot, our software is no longer backwards compatible, and we will have to ask all our external clients to inject the authorization in the header up front instead of waiting to receive a 401.
Can someone please help?
My earlier post with stacktrace
I'm having a hard time trying to get my task to stay persistent and run indefinitely from a WCF service. I may be doing this the wrong way and am willing to take suggestions.
I have a task that starts to process any incoming requests that are dropped into a BlockingCollection. From what I understand, the GetConsumingEnumerable() method allows me to keep pulling data as it arrives. It works with no problem by itself: I was able to process dozens of requests without a single error or flaw using a Windows Forms app to fill out and submit the requests. Once I was confident in this process, I wired it up to my site via an ASMX web service and used jQuery AJAX calls to submit requests.
The site submits a request based on a URL; the web service downloads the HTML content from that URL and looks for other URLs within the content. It then creates a request for each URL it finds and submits it to the BlockingCollection. Within the WCF service, if the application is Online (i.e. the task has started), it pulls the requests from GetConsumingEnumerable via a Parallel.ForEach and processes them.
This works for the first few submissions, but then the task just stops unexpectedly. Of course, this is doing 10x more requests than I could simulate in testing, but I expected it to just throttle. I believe the issue is in my method that starts the task:
public void Start()
{
    Online = true;

    Task.Factory.StartNew(() =>
    {
        tokenSource = new CancellationTokenSource();
        CancellationToken token = tokenSource.Token;

        ParallelOptions options = new ParallelOptions();
        options.MaxDegreeOfParallelism = 20;
        options.CancellationToken = token;

        try
        {
            // blocks on the collection and processes requests as they arrive
            Parallel.ForEach(FixedWidthQueue.GetConsumingEnumerable(token), options, (request) =>
            {
                Process(request);
                options.CancellationToken.ThrowIfCancellationRequested();
            });
        }
        catch (OperationCanceledException e)
        {
            Console.WriteLine(e.Message);
            return;
        }
    }, TaskCreationOptions.LongRunning);
}
I've thought about moving this into a WF4 service, just wiring it up in a workflow and using Workflow Persistence, but I am not willing to learn WF4 unless necessary. Please let me know if more information is needed.
The code you have shown is correct by itself.
However there are a few things that can go wrong:
If an exception occurs, your task stops (of course). Try adding a try-catch and log the exception.
If you start worker threads in a hosted environment (ASP.NET, WCF, SQL Server), the host can decide arbitrarily to shut down any worker process. For example, if your ASP.NET site is inactive for some time, the app is shut down. The hosts I just mentioned are not designed to keep custom threads running. You will probably have more success with a dedicated application (.exe) or even a Windows Service.
It turns out the cause of this issue was the WCF binding configuration. The task suddenly stopped because WCF killed the connection due to an open timeout. The open timeout setting is the time a request will wait for the service to open a connection before timing out. In certain situations it reached the limit of 10 max connections and caused the incoming connections to get backed up waiting for a connection. I made sure I closed all connections to the host after the transactions were complete, so I gave in to upping the max connections and the open timeout period. After this, it ran flawlessly.