OpenDJ SDK Thread Pool Exception - ldap

We are using the OpenDJ SDK to connect to Directory Services. The code is shown below.
@Bean
public LDAPConnectionFactory createConnectionFactory() {
    LDAPOptions ldapOptions = new LDAPOptions();
    ldapOptions.setTimeout(30, TimeUnit.SECONDS);
    final LDAPConnectionFactory factory = new LDAPConnectionFactory(host, port, ldapOptions);
    Connections.newFixedConnectionPool(factory, connectionPoolSize);
    return factory;
}
The connectionPoolSize parameter is currently set to 10. The code was working fine, but suddenly getConnection() on the factory started returning a null object. When I comment out the Connections.newFixedConnectionPool statement, it works as expected. Are we missing anything?

If you are creating a fixed connection pool, you should request connections from the pool, not from the factory.
The issue is that you are not saving the pool returned by Connections.newFixedConnectionPool.
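A minimal sketch of the fix (assuming the OpenDJ LDAP SDK shown in the question, where Connections.newFixedConnectionPool wraps the factory and returns a pool that itself implements ConnectionFactory): keep the pool and expose it as the bean, so callers obtain connections from the pool rather than from the raw factory.
@Bean
public ConnectionFactory createConnectionFactory() {
    LDAPOptions ldapOptions = new LDAPOptions();
    ldapOptions.setTimeout(30, TimeUnit.SECONDS);
    final LDAPConnectionFactory factory = new LDAPConnectionFactory(host, port, ldapOptions);
    // Keep and return the pool: getConnection() on it borrows a pooled connection,
    // and closing that connection hands it back to the pool.
    return Connections.newFixedConnectionPool(factory, connectionPoolSize);
}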

this command is not available unless the connection is created with admin-commands enabled

When trying to run the following in Redis using BookSleeve:
using (var conn = new RedisConnection(server, port, -1, password))
{
    var result = conn.Server.FlushDb(0);
    result.Wait();
}
I get an error saying:
"This command is not available unless the connection is created with admin-commands enabled"
I am not sure how to execute commands as admin. Do I need to create an account in the DB with admin access and log in with that?
Updated answer for StackExchange.Redis:
var conn = ConnectionMultiplexer.Connect("localhost,allowAdmin=true");
Note also that the object created here should be created once per application and shared as a global singleton, per Marc:
Because the ConnectionMultiplexer does a lot, it is designed to be
shared and reused between callers. You should not create a
ConnectionMultiplexer per operation. It is fully thread-safe and ready
for this usage.
Basically, the dangerous commands that you don't need in routine operations, but which can cause lots of problems if used inappropriately (i.e. the equivalent of DROP DATABASE in T-SQL, since your example is FlushDb), are protected by a "yes, I meant to do that..." flag:
using (var conn = new RedisConnection(server, port, -1, password,
allowAdmin: true)) <==== here
I will improve the error message to make this very clear and explicit.
You can also set this in C# when you're creating your multiplexer by setting AllowAdmin = true:
private ConnectionMultiplexer GetConnectionMultiplexer()
{
    var options = ConfigurationOptions.Parse("localhost:6379");
    options.ConnectRetry = 5;
    options.AllowAdmin = true;
    return ConnectionMultiplexer.Connect(options);
}
For those who, like me, faced the error:
StackExchange.Redis.RedisCommandException: This operation is not available unless admin mode is enabled: ROLE
after upgrading StackExchange.Redis to version 2.2.4 with a Sentinel connection: it is a known bug. The workaround was either to downgrade the client or to add allowAdmin=true to the connection string while waiting for the fix.
The issue is fixed as of the 2.2.50 public release.

SimpleRoleProvider causing remote transaction inside TransactionScope

I am in the process of upgrading ASP.NET Membership to the new SimpleMembership provider in MVC 4. This is an Azure / SQL Azure app which runs fine on localhost but fails when deployed. I have code in a transaction as follows:
TransactionOptions toptions = new TransactionOptions();
toptions.IsolationLevel = System.Transactions.IsolationLevel.Serializable;
using (TransactionScope trans = new TransactionScope(TransactionScopeOption.Required, toptions))
{
    try
    {
        // ... do a bunch of database stuff in a single dbContext ...
        var roleprov = (SimpleRoleProvider)Roles.Provider;
        string[] roles = roleprov.GetRolesForUser(Username);
        // The above line fails with: The transaction manager has disabled its support
        // for remote/network transactions. (Exception from HRESULT: 0x8004D024)
    }
}
I am using this technique to populate the Roles classes. The stack trace seems to indicate that it is indeed trying to fire off a sub-transaction to complete that call. The SimpleMembership tables are in a different DB. How can I retrieve role info from the role provider inside the context of a separate transaction?
The problem is that GetRolesForUser causes a new connection to be opened to a second database, which in turn detects that it is inside a TransactionScope. This then promotes the transaction to the DTC (MSDN - System.Transactions Integration with SQL Server). You could try a few options:
Get roles before the transaction starts
You could retrieve string[] roles outside your TransactionScope. Is there a reason you need to get them inside the scope? Given that you say:
How can I retrieve role info from the role provider inside the context of a separate transaction
it sounds like you could get the role info before the TransactionScope and have no problems.
Turn off transactions on the simple membership connection string
You can tell a connection not to take part in transactions by putting "enlist=false" in its connection string (see SqlConnection.ConnectionString), so this might be one option for you if you never need transactions on the database you use for Simple Membership.
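For example, a hypothetical connection string (the server and database names are placeholders; the Enlist keyword is the relevant part):
Data Source=<server>;Initial Catalog=<simple-membership-db>;Integrated Security=True;Enlist=false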
Try opening the Simple Membership connection before the transaction
SimpleRoleProvider creates its database object and then opens it the first time it is used. But it doesn't close it until... scratch that: the connection is opened on each call to GetRolesForUser, so you are out of luck. I was thinking you could call GetRolesForUser once before the TransactionScope is opened and then again inside the scope using the already open connection, but you can't.
Play with the IObjectContextAdapter
Disclaimer: I can't promise this will work as I can't test with your setup.
You can play tricks to prevent promotion when using two connection strings by opening the non-transactional connection outside the transaction scope first; the transaction then shouldn't be promoted. The same trick applies if you cause the same connection to close and then open inside the same transaction scope (which would otherwise cause promotion).
You could try this with your context, and see if that stopped the GetRolesForUser promoting the transaction, but I doubt that would work as GetRolesForUser causes the connection to open if it isn't already. As I can't test in your scenario, I will include it in case it helps.
using (var db = new ExampleContext())
{
    var adapter = db as System.Data.Entity.Infrastructure.IObjectContextAdapter;
    using (var conn = adapter.ObjectContext.Connection)
    {
        conn.Open();
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
        {
            // perform operations
            db.SaveChanges();
            // perform more operations
            db.SaveChanges();
            // perform even more operations
            db.SaveChanges();
            // If you don't complete, the transaction won't commit and you will lose the changes
            scope.Complete();
        }
    }
}

jTDS JDBC not throwing exception when it should

I have a problem with the following code using the jTDS JDBC driver. Everything works, and queries are no problem, but I don't get an error/exception when the connection fails. I have tried entering a wrong IP, disabling the local network connection, providing a wrong port number, etc., but no luck. I really need to know when the connection fails.
It seems that everything stops at the line "con = java.sql.DriverManager.getConnection(url, id, pass);" (but only when it really should throw an exception...).
import java.sql.SQLException;

public class Main {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        java.sql.Connection con = null;
        String url = "jdbc:jtds:sqlserver://x.x.x.x/DATABASE";
        String id = "secret";
        String pass = "secret";
        Class.forName("net.sourceforge.jtds.jdbc.Driver");
        System.out.println("Connecting to database...");
        con = java.sql.DriverManager.getConnection(url, id, pass);
        System.out.println("Connected?");
        // Program never gets here, but does not close either.
        if (con.isValid(1000)) System.out.println("Does not work either...");
        if (con != null) con.close();
    }
}
I'm not sure why you don't get an exception. I do get a SQLException (SQLState=S1000) when using SQL Server 2008 and SQL Server 2000 with jTDS 1.2.4.
If upgrading your jTDS driver doesn't help, you might try appending ";loginTimeout=20" to your URL string, so it would look like:
String url= "jdbc:jtds:sqlserver://x.x.x.x/DATABASE;loginTimeout=20";
Then rerun your application and wait at least 20 seconds. Hopefully you will get a timeout exception.
If the loginTimeout setting doesn't help, you can also play with the socketTimeout setting, though see the jTDS FAQ concerning the implications of using socketTimeout. Basically, you want to set it longer than the longest query you expect your application to execute.
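For example, a hypothetical URL combining both settings (socketTimeout is given in seconds; the value below is only an illustration and should exceed your slowest expected query):
String url = "jdbc:jtds:sqlserver://x.x.x.x/DATABASE;loginTimeout=20;socketTimeout=300";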

Random error when testing with NHibernate on an in-Memory SQLite db

I have a system which after getting a message - enqueues it (write to a table), and another process polls the DB and dequeues it for processing. In my automatic tests I've merged the operations in the same process, but cannot (conceptually) merge the NH sessions from the two operations.
Naturally - problems arise.
I've read everything I could about getting the SQLite-InMemory-NHibernate combination to work in the testing world, but I've now run into RANDOMLY failing tests, due to "no such table" errors. To make it clear - "random" means that the same test with the same exact configuration and code will sometimes fail.
I have the following SQLite configuration:
return SQLiteConfiguration
    .Standard
    .ConnectionString(x => x.Is("Data Source=:memory:; Version=3; New=True; Pooling=True; Max Pool Size=1;"))
    .Raw(NHibernate.Cfg.Environment.ReleaseConnections, "on_close");
At the beginning of my test (every test) I fetch the "static" session provider, and kindly ask it to flush the existing DB clean, and recreate the schema:
public void PurgeDatabaseOrCreateNew()
{
    using (var session = GetNewSession())
    using (var tx = session.BeginTransaction())
    {
        PurgeDatabaseOrCreateNew(session);
        tx.Commit();
    }
}

private void PurgeDatabaseOrCreateNew(ISession session)
{
    // http://ayende.com/Blog/archive/2009/04/28/nhibernate-unit-testing.aspx
    new SchemaExport(_Configuration)
        .Execute(false, true, false, session.Connection, null);
}
So yes, it's a different session, but the connection is pooled on SQLite, so the next session I create will see the generated schema. Yet, while most of the time it works, sometimes the later "enqueue" operation fails because it cannot see a table for my incoming messages.
Also, that seems to happen at most once or twice per test-suite run; not all the tests fail, just the first one (and sometimes another one; I'm not quite sure if it's the second or not).
The worst part is the randomness, naturally. I've told myself I've fixed this several times now, just because it simply "stopped failing". At random.
This happens on .NET 4.0, the x86 version of System.Data.SQLite, Win7 64-bit and 2008 R2 (three different machines in total), NH 2.1.2 configured with FNH, in TestDriven.NET 32-bit processes and NUnit console 32-bit processes.
Help?
Hi, I'm pretty sure I have the exact same problem as you. I open and close multiple sessions per integration test. After digging through the SQLite connection pooling code and some experimenting of my own, I've come to the following conclusion:
The SQLite pooling code caches the connection using WeakReferences, which isn't the best option for caching, since the references to the connection(s) will be cleared when there is no normal (strong) reference to the connection and the GC runs. Since you can't predict when the GC runs, this explains the "randomness". Try adding a GC.Collect(); between closing one session and opening another: your test will then always fail.
My solution was to cache the connection myself between opening sessions, like this:
public class BaseIntegrationTest
{
    private static ISessionFactory _sessionFactory;
    private static Configuration _configuration;
    private static SchemaExport _schemaExport;

    // I cache the whole session because I don't want it and the
    // underlying connection to get closed.
    // The "Connection" property of the ISession is what we actually want.
    // Using the NHibernate SQLite Driver to get the connection would probably
    // work too.
    private static ISession _keepConnectionAlive;

    static BaseIntegrationTest()
    {
        _configuration = new Configuration();
        _configuration.Configure();
        _configuration.AddAssembly(typeof(Product).Assembly);
        _sessionFactory = _configuration.BuildSessionFactory();
        _schemaExport = new SchemaExport(_configuration);
        _keepConnectionAlive = _sessionFactory.OpenSession();
    }

    [SetUp]
    protected void RecreateDB()
    {
        _schemaExport.Execute(false, true, false, _keepConnectionAlive.Connection, null);
    }

    protected ISession OpenSession()
    {
        return _sessionFactory.OpenSession(_keepConnectionAlive.Connection);
    }
}
Each of my integration tests inherits from this class and calls OpenSession() to get a session. RecreateDB is called by NUnit before each test because of the [SetUp] attribute.
I hope this helps you or anyone else who gets this error.
The only thing that comes to mind is that you are randomly leaving a session open after a test. You must make sure any existing ISession is closed before you open another one. If you are not using a using() statement or calling Dispose() manually, the session might still be alive somewhere, causing those random exceptions.

dataSource injection in a Grails service

I have a service with application scope, not transactional.
I have a service method which:
uses the injected dataSource to create a stored procedure call [using Sql.call{...}], executes it, and traverses the result set.
Based on the result set, I subdivide it into equal-sized chunks and process them in multiple threads.
Each thread tries to do Sql sql = new Sql(dataSource)
Here a deadlock occurs.
Why is that? Doesn't the dataSource return a new or idle connection?
Try looking into GPars: it's a Groovy parallelization framework.
I ran into exactly the same issue. After hours of searching I've found the solution.
In your DataSource.groovy config file you can set the parameters for connection pooling to the database.
I've changed the minIdle, maxIdle and maxActive settings of the BasicDataSource (http://commons.apache.org/proper/commons-dbcp/apidocs/org/apache/commons/dbcp/BasicDataSource.html) so that my config file looks something like this:
dataSource {
    url = "jdbc:mysql://127.0.0.1/sipsy_dev?autoReconnect=true&zeroDateTimeBehavior=convertToNull"
    driverClassName = "com.mysql.jdbc.Driver"
    username = "sipsy_dev"
    password = "sipsy_dev"
    pooled = true
    properties {
        minEvictableIdleTimeMillis = 1800000
        timeBetweenEvictionRunsMillis = 1800000
        numTestsPerEvictionRun = 3
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        minIdle = 100
        maxIdle = 250
        maxActive = 500
        validationQuery = "SELECT 1"
    }
    dialect = 'org.hibernate.dialect.MySQL5InnoDBDialect'
}
When you are not in a transaction, you have to release the connection that GroovySQL picks up from the datasource. The pool runs out of connections and that's why it locks up.
Inside a transaction TransactionAwareDataSourceProxy will take care of sharing the connection and therefore releasing the connection from GroovySQL isn't required in that case. See http://jira.grails.org/browse/GRAILS-5454 for details.
This is a better way to use GroovySQL in Grails, since the OpenSessionInView (OSIV) interceptor will take care of closing the connection and it will share the same database connection as Hibernate. This method works in both cases, inside and outside transactions:
Sql sql = new Sql(sessionFactory.currentSession.connection())