jTDS JDBC not throwing exception when it should

I have a problem with the following code using the jTDS JDBC Driver. Everything works, and queries are no problem. But I don't get an error/exception when the connection fails. I have tried entering a wrong IP, disabling the local network connection, providing a wrong port number, etc., but no luck. I really need to know when the connection fails.
It seems that everything stops at the line: "con = java.sql.DriverManager.getConnection(url, id, pass);" (But only when it really should throw an exception...)
import java.sql.SQLException;
public class Main {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        java.sql.Connection con = null;
        String url = "jdbc:jtds:sqlserver://x.x.x.x/DATABASE";
        String id = "seret";
        String pass = "secret";
        Class.forName("net.sourceforge.jtds.jdbc.Driver");
        System.out.println("Connecting to database...");
        con = java.sql.DriverManager.getConnection(url, id, pass);
        System.out.println("Connected?");
        // Program never gets here, but does not close either.
        if (con.isValid(1000)) System.out.println("Does not work either...");
        if (con != null) con.close();
    }
}

I'm not sure why you don't get an exception. I do get a SQLException (SQLState=S1000) when using SQL Server 2008 and SQL Server 2000 with jTDS 1.2.4.
If upgrading your jTDS driver doesn't help you might try appending ";loginTimeout=20" to your URL string. So it would look like:
String url= "jdbc:jtds:sqlserver://x.x.x.x/DATABASE;loginTimeout=20";
Then rerun your application and wait at least 20 seconds. Hopefully you will get a timeout exception.
If the loginTimeout setting doesn't help, you can also play with the socketTimeout setting, but see the jTDS FAQ concerning the implications of using socketTimeout. Basically you want to set it longer than the longest query you expect your application to execute.
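For illustration, a combined URL with both settings might look like this (the 20 and 120 second values are arbitrary placeholders, and id/pass are reused from the question); with the timeouts in place, an unreachable server should surface as an exception instead of hanging:
String url = "jdbc:jtds:sqlserver://x.x.x.x/DATABASE;loginTimeout=20;socketTimeout=120";
try {
    java.sql.Connection con = java.sql.DriverManager.getConnection(url, id, pass);
    con.close();
} catch (java.sql.SQLException e) {
    // With loginTimeout set, a failed connection attempt should end up here
    // after roughly 20 seconds rather than blocking indefinitely.
    System.err.println("Connection failed: " + e.getMessage());
}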

Related

OpenDJ SDK Thread Pool Exception

We are using the OpenDJ SDK to connect to Directory Services. The code is shown below.
@Bean
public LDAPConnectionFactory createConnectionFactory() {
    LDAPOptions ldapOptions = new LDAPOptions();
    ldapOptions.setTimeout(30, TimeUnit.SECONDS);
    final LDAPConnectionFactory factory = new LDAPConnectionFactory(host, port, ldapOptions);
    Connections.newFixedConnectionPool(factory, connectionPoolSize);
    return factory;
}
The connection pool size parameter is set to 10 at present. The code was working fine, but suddenly the getConnection() method on the factory started returning a null object. When I comment out the Connections.newFixedConnectionPool statement it works as expected. Are we missing anything?
If you are creating a fixed connection pool, you should request a connection from it, not from the factory.
The issue is that you are actually not saving the returned pool.
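A minimal sketch of that change, reusing the names from the question (host, port, connectionPoolSize) and assuming an SDK version where newFixedConnectionPool returns an object usable as a ConnectionFactory:
@Bean
public ConnectionFactory createConnectionFactory() {
    LDAPOptions ldapOptions = new LDAPOptions();
    ldapOptions.setTimeout(30, TimeUnit.SECONDS);
    LDAPConnectionFactory factory = new LDAPConnectionFactory(host, port, ldapOptions);
    // Hand out the pool itself (note the widened return type); callers then
    // call getConnection() on the pool rather than on the underlying factory.
    return Connections.newFixedConnectionPool(factory, connectionPoolSize);
}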

Elmah - Logging sql in the database to review in elmah.axd

I have a .NET web application that frequently executes queries to get data from a local database.
In situations where the query doesn't run (due to an exception) or returns an unexpected set of data (such as an empty set), I want to be able to rebuild the query (replacing its @parameters with the values actually used) and store the complete query in the database along with the exception.
I'm aware that I can do this through standard code but I was wondering whether it would be safer to do via Elmah?
Also would doing this via Elmah give me the ability to be able to view the executed sql through elmah.axd (when access is enabled)?
Unless the thrown exception includes the query with the actual values, ELMAH doesn't help you there other than logging the exception. You can catch the exception yourself and do custom logging to ELMAH using the ErrorSignal.Raise method, as explained here: How to use ELMAH to manually log errors?
I log SQL exceptions by passing the exception and the actual command to a new Exception class. The class wraps the SqlException and the System.Data.Common.DbCommand objects. Using that information I can create a message to provide the sql command details:
public override string Message
{
    get
    {
        StringBuilder message = new StringBuilder("");
        StringBuilder sql = new StringBuilder("");
        sql.AppendFormat(" {0} ", Command.CommandText);
        foreach (SqlParameter param in Command.Parameters)
        {
            sql.AppendFormat(" {0} - {1}", param.ParameterName,
                param.Value.ToString());
        }
        message.AppendFormat("Error: {0} SQL: {1} User: {2}", SqlEx.Message,
            sql, Username);
        return message.ToString();
    }
}
Finally, I use the ErrorSignal Raise method to log the message in Elmah:
Elmah.ErrorSignal.FromCurrentContext().Raise(new DetailSqlException(e.Exception as SqlException, e.Command, user));
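For completeness, the wrapping class would be shaped roughly like this; the poster only describes it, so the exact constructor and property names here are assumptions inferred from the snippets above:
public class DetailSqlException : Exception
{
    // The wrapped exception, the failed command and the current user are what
    // the Message override shown above reads from.
    public SqlException SqlEx { get; private set; }
    public System.Data.Common.DbCommand Command { get; private set; }
    public string Username { get; private set; }

    public DetailSqlException(SqlException sqlEx, System.Data.Common.DbCommand command, string username)
        : base(sqlEx == null ? null : sqlEx.Message, sqlEx)
    {
        SqlEx = sqlEx;
        Command = command;
        Username = username;
    }

    // ... the Message override shown above goes here ...
}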

this command is not available unless the connection is created with admin-commands enabled

When trying to run the following in Redis using BookSleeve:
using (var conn = new RedisConnection(server, port, -1, password))
{
    var result = conn.Server.FlushDb(0);
    result.Wait();
}
I get an error saying:
This command is not available unless the connection is created with
admin-commands enabled
I am not sure how to execute commands as admin. Do I need to create an account in the db with admin access and log in with that?
Updated answer for StackExchange.Redis:
var conn = ConnectionMultiplexer.Connect("localhost,allowAdmin=true");
Note also that the object created here should be created once per application and shared as a global singleton, per Marc:
Because the ConnectionMultiplexer does a lot, it is designed to be
shared and reused between callers. You should not create a
ConnectionMultiplexer per operation. It is fully thread-safe and ready
for this usage.
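One common way to follow that advice is a lazily-initialized static, for example (connection string reused from the snippet above):
// Create the multiplexer once and share it; it is thread-safe.
private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
    new Lazy<ConnectionMultiplexer>(() =>
        ConnectionMultiplexer.Connect("localhost,allowAdmin=true"));

public static ConnectionMultiplexer Connection
{
    get { return LazyConnection.Value; }
}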
Basically, the dangerous commands that you don't need in routine operations, but which can cause lots of problems if used inappropriately (i.e. the equivalent of drop database in tsql, since your example is FlushDb) are protected by a "yes, I meant to do that..." flag:
using (var conn = new RedisConnection(server, port, -1, password,
allowAdmin: true)) <==== here
I will improve the error message to make this very clear and explicit.
You can also set this in C# when you're creating your multiplexer - set AllowAdmin = true
private ConnectionMultiplexer GetConnectionMultiplexer()
{
    var options = ConfigurationOptions.Parse("localhost:6379");
    options.ConnectRetry = 5;
    options.AllowAdmin = true;
    return ConnectionMultiplexer.Connect(options);
}
For those who like me faced the error:
StackExchange.Redis.RedisCommandException: This operation is not
available unless admin mode is enabled: ROLE
after upgrading StackExchange.Redis to version 2.2.4 with a Sentinel connection: it's a known bug; the workaround was either to downgrade the client or to add allowAdmin=true to the connection string while waiting for the fix.
Starting from the 2.2.50 public release the issue is fixed.

SQL Azure - Transient "ExecuteReader requires an open connection" exception

I'm using SQL Azure in a Windows Azure app running as a cloud service. Most of the time my database actions work completely fine (that is, after handling all sorts of timeouts and what not); however, I'm running into a problem that seems to occur only transiently:
using (var connection = new SqlConnection(m_connectionString))
{
    m_ConnectionRetryPolicy.ExecuteAction(() => connection.Open());
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM X WHERE Y = Z";
        var reader = m_CommandRetryPolicy.ExecuteAction(() => command.ExecuteReader());
        return LoadData(reader).FirstOrDefault();
    }
}
The line that fails is the Command.ExecuteReader with an:
ExecuteReader requires an open and available Connection. The connection's current state is closed
Things that I have already considered:
I'm not "reusing" an old connection or saving a connection in a member variable
There should be no concurrency issues - the repository class that these methods belong to is created each time it is needed
Has anyone else experienced this? I could of course just add this to the list of exceptions which would yield a retry, but I'm not very comfortable with that.
I had a bunch of these errors a few days ago (West Europe) on my production deployment, but they went away by themselves. At the same time I was seeing timeouts, throttling and other errors from SQL Azure. I assume that there was a temporary problem with the platform (or at least the server that I am running on).
You probably aren't doing anything wrong in your code, but are suffering from degraded performance on SQL Azure. Try and handle the errors, perform retries, exponential back-off, queues (to reduce concurrency), splitting your load across databases — that sort of thing.
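As an illustration of the retry-with-back-off idea (not the retry policies the question already uses, which are more sophisticated; the attempt count and delays here are arbitrary, and the snippet assumes the System and System.Threading namespaces):
// Illustrative only: run an action, retrying a few times with exponential back-off.
private static T ExecuteWithRetry<T>(Func<T> action, int maxAttempts = 4)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return action();
        }
        catch (Exception)
        {
            // Decide here which exceptions are worth retrying (transient SQL
            // errors, the closed-connection error from the question, ...).
            if (attempt >= maxAttempts) throw;
            // Back off 1s, 2s, 4s, ... before the next attempt.
            Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}
It would be called the same way as the existing ExecuteAction wrappers, e.g. var reader = ExecuteWithRetry(() => command.ExecuteReader());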
Write everything within a try/catch/finally block, as follows:
try
{
    m_ConnectionRetryPolicy.ExecuteAction(() => connection.Open());
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM X WHERE Y = Z";
        var reader = m_CommandRetryPolicy.ExecuteAction(() => command.ExecuteReader());
        return LoadData(reader).FirstOrDefault();
    }
}
catch (Exception ex)
{
    // Handle or log the exception here.
}
finally
{
    connection.Close();
}
Remember to close the connection in the finally block as well.
There is an Enterprise Library that MS has produced specifically for SQL Azure; here are some examples from their Patterns & Practices group.
It's similar to what you are doing, but it does more on the reliability side (and these examples show how to get a reliable connection):
http://msdn.microsoft.com/en-us/library/hh680899(v=pandp.50).aspx
Are you sure it's the reader that's failing and not the opening of the connection? I'm encountering an exception when I wrap the connection.Open() in the m_ConnectionRetryPolicy.ExecuteAction().
However it works just fine for me if I skip the ExecuteAction wrapper and open the connection using connection.OpenWithRetry(m_ConnectionRetryPolicy).
And I'm also using command.ExecuteReaderWithRetry(m_ConnectionRetryPolicy) which is working for me.
I have no idea why it's not working when wrapped in ExecuteAction, though.
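Putting that together with the snippet from the question, the shape would be roughly as follows (assuming the OpenWithRetry/ExecuteReaderWithRetry extension methods come from the Enterprise Library linked above, and using the question's command retry policy for the reader call):
using (var connection = new SqlConnection(m_connectionString))
{
    // Extension methods from the transient fault handling library replace the
    // explicit ExecuteAction wrappers.
    connection.OpenWithRetry(m_ConnectionRetryPolicy);
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM X WHERE Y = Z";
        var reader = command.ExecuteReaderWithRetry(m_CommandRetryPolicy);
        return LoadData(reader).FirstOrDefault();
    }
}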
I believe this means that Azure has closed the connection behind the scenes, without telling the connection pooler. This is by design. So, the connection pooler gives you what it thinks is an available, open connection, but when you try to use it, it finds out it's not open after all.
This seems very clunky to me, but it's the way Azure is at the moment.

Random error when testing with NHibernate on an in-Memory SQLite db

I have a system which, after getting a message, enqueues it (writes it to a table), and another process polls the DB and dequeues it for processing. In my automatic tests I've merged the operations into the same process, but cannot (conceptually) merge the NH sessions of the two operations.
Naturally - problems arise.
I've read everything I could about getting the SQLite-InMemory-NHibernate combination to work in the testing world, but I've now run into RANDOMLY failing tests, due to "no such table" errors. To make it clear - "random" means that the same test with the same exact configuration and code will sometimes fail.
I have the following SQLite configuration:
return SQLiteConfiguration
    .Standard
    .ConnectionString(x => x.Is("Data Source=:memory:; Version=3; New=True; Pooling=True; Max Pool Size=1;"))
    .Raw(NHibernate.Cfg.Environment.ReleaseConnections, "on_close");
At the beginning of my test (every test) I fetch the "static" session provider, and kindly ask it to flush the existing DB clean, and recreate the schema:
public void PurgeDatabaseOrCreateNew()
{
    using (var session = GetNewSession())
    using (var tx = session.BeginTransaction())
    {
        PurgeDatabaseOrCreateNew(session);
        tx.Commit();
    }
}

private void PurgeDatabaseOrCreateNew(ISession session)
{
    // http://ayende.com/Blog/archive/2009/04/28/nhibernate-unit-testing.aspx
    new SchemaExport(_Configuration)
        .Execute(false, true, false, session.Connection, null);
}
So yes, it's on a different session, but the connection is pooled on SQLite, so the next session I create will see the generated schema. Yet, while most of the time it works, sometimes the later "enqueue" operation will fail because it cannot see a table for my incoming messages.
Also, that seems to happen at most once or twice per test suite run; not all the tests are failing, just the first one (and sometimes another one; not quite sure if it's the second or not).
The worst part is the randomness, naturally. I've told myself I've fixed this several times now, just because it simply "stopped failing". At random.
This happens on FW4.0, the System.Data.SQLite x86 version, Win7 64-bit and 2008R2 (three different machines in total), NH2.1.2, configured with FNH, on TestDriven.NET 32-bit processes and NUnit console 32-bit processes.
Help?
Hi, I'm pretty sure I have the exact same problem as you. I open and close multiple sessions per integration test. After digging through the SQLite connection pooling code and some experimenting of my own, I've come to the following conclusion:
The SQLite pooling code caches the connection using WeakReferences, which isn't the best option for caching, since the reference to the connection(s) will be cleared when there is no normal (strong) reference to the connection and the GC runs. Since you can't predict when the GC runs, this explains the "randomness". If you add a GC.Collect(); between closing one session and opening another, your test will always fail.
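A hypothetical way to make the failure deterministic, reusing GetNewSession() from the question:
using (var session = GetNewSession())
{
    // ... enqueue a message ...
}
// Force a collection: the pooled in-memory connection is held only through a
// WeakReference, so it can now be reclaimed, taking the schema with it.
GC.Collect();
GC.WaitForPendingFinalizers();
using (var session = GetNewSession())
{
    // ... dequeuing here now fails with "no such table" ...
}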
My solution was to cache the connection myself between opening sessions, like this:
public class BaseIntegrationTest
{
    private static ISessionFactory _sessionFactory;
    private static Configuration _configuration;
    private static SchemaExport _schemaExport;
    // I cache the whole session because I don't want it and the
    // underlying connection to get closed.
    // The "Connection" property of the ISession is what we actually want.
    // Using the NHibernate SQLite Driver to get the connection would probably
    // work too.
    private static ISession _keepConnectionAlive;

    static BaseIntegrationTest()
    {
        _configuration = new Configuration();
        _configuration.Configure();
        _configuration.AddAssembly(typeof(Product).Assembly);
        _sessionFactory = _configuration.BuildSessionFactory();
        _schemaExport = new SchemaExport(_configuration);
        _keepConnectionAlive = _sessionFactory.OpenSession();
    }

    [SetUp]
    protected void RecreateDB()
    {
        _schemaExport.Execute(false, true, false, _keepConnectionAlive.Connection, null);
    }

    protected ISession OpenSession()
    {
        return _sessionFactory.OpenSession(_keepConnectionAlive.Connection);
    }
}
Each of my integration tests inherits from this class and calls OpenSession() to get a session. RecreateDB is called by NUnit before each test because of the [SetUp] attribute.
I hope this helps you or anyone else who gets this error.
The only thing that comes to mind is that you are randomly leaving a session open after a test. You must make sure any existing ISession is closed before you open another one. If you are not using the using() statement or calling Dispose() manually, the session might still be alive somewhere, causing those random exceptions.
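For example, a pattern along these lines keeps each test's session lifetime deterministic (GetNewSession() as in the question):
using (var session = GetNewSession())
using (var tx = session.BeginTransaction())
{
    // ... test work ...
    tx.Commit();
}   // the session is disposed here even if the test throws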