I've tried both Base.exec(...) and new DB("default").exec("refresh materialized view concurrently ..."), and neither works. There are no errors, but the statement isn't run.
Finally figured this out. I'm not sure why, but Base.exec won't work; only Statement.executeUpdate does:
// Open your connection (ActiveJDBC manages the underlying JDBC connection)
String sql = "refresh materialized view concurrently MY_VIEW";
try {
    Connection conn = new DB("default").connection();
    // try-with-resources closes the Statement even if executeUpdate fails
    try (Statement stmt = conn.createStatement()) {
        stmt.executeUpdate(sql);
        conn.commit();
    }
} catch (SQLException e) {
    e.printStackTrace();
}
I am using PetrelLogger.NewAsyncProgress, which seems to work well. However, I can't figure out how to report an error for my task. Once I dispose of the NewAsyncProgress, it reports 'Success' for my task.
I have tried setting ProgressStatus = -1, but that didn't make a difference.
Example:
using (_asyncProgress = PetrelLogger.NewAsyncProgress("Doing Job", ProgressType.Default, (AsyncProgressCanceledCallback)AsyncProgressCanceled, this))
{
    try
    {
        // Do something
        _asyncProgress.ProgressStatus = 100;
    }
    catch (Exception e)
    {
        // Error happened
        _asyncProgress.ProgressStatus = -1;
    }
}
So even if an exception is thrown, the task manager still shows the result as Success, 100%. Any ideas?
It's not possible in Ocean at the moment. However, we have this requirement recorded, so it may be implemented in a future release.
I am using Datanucleus JDO on top of HSqlDb.
I would like to execute the following SQL statement to tell HsqlDb to set the write delay to 0:
"SET WRITE_DELAY 0"
Is there a way I can do this from a JDO PersistenceManager or a PersistenceManagerFactory?
On a side note: I have tried to modify write_delay by using the following connection URL:
jdbc:hsqldb:file:data/hsqldb/dbbench;write_delay=false
It didn't work. I debugged the HsqlDb sources and I could still see the write delay being set to 10 seconds.
I think I have found a solution that will work for me:
public PersistenceManager getPersistenceManager() {
    PersistenceManager persistenceManager =
        _persistenceManagerFactory.getPersistenceManager();
    JDOConnection dataStoreConnection =
        persistenceManager.getDataStoreConnection();
    Object nativeConnection = dataStoreConnection.getNativeConnection();
    if (!(nativeConnection instanceof Connection)) {
        dataStoreConnection.close();
        return persistenceManager;
    }
    Connection connection = (Connection) nativeConnection;
    try (Statement statement = connection.createStatement()) {
        statement.executeUpdate("SET WRITE_DELAY 0");
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        // Return the borrowed connection to the PersistenceManager
        dataStoreConnection.close();
    }
    return persistenceManager;
}
You can write a startup script, dbbench.script in this example, and put the SQL in there.
See: http://best-practice-software-engineering.ifs.tuwien.ac.at/technology/tech-hsqldb.html
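For example, with the connection URL above the script file would be data/hsqldb/dbbench.script. If I remember right, the generated .script in HSQLDB versions of that era already contains a write-delay statement, so changing it to (or adding) the following line should apply the setting at startup:
SET WRITE_DELAY 0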
I think this page
http://www.datanucleus.org/products/accessplatform/jdo/datastore_connection.html
tells you all you need, no?
I made a test class against the repository methods shown below:
public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File
{
    try
    {
        _session.Save(FileToAdd);
        _session.Flush();
    }
    catch (Exception e)
    {
        if (e.InnerException != null && e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
            throw new ArgumentException("Unique Name must be unique");
        else
            throw; // rethrow without resetting the stack trace
    }
}
public void RemoveFile(File FileToRemove)
{
    _session.Delete(FileToRemove);
    _session.Flush();
}
And the test class:
try
{
    Data.File crashFile = new Data.File();
    crashFile.UniqueName = "NonUniqueFileNameTest";
    crashFile.Extension = ".abc";
    repo.AddFile(crashFile);
    Assert.Fail();
}
catch (Exception e)
{
    Assert.IsInstanceOfType(e, typeof(ArgumentException));
}
// Clean up the file
Data.File removeFile = repo.GetFiles().Where(f => f.UniqueName == "NonUniqueFileNameTest").FirstOrDefault();
repo.RemoveFile(removeFile);
The test fails. When I stepped in to trace the problem, I found that when I call _session.Flush() right after _session.Delete(), it throws the exception, and the SQL it submits is actually an INSERT INTO statement, which is exactly the SQL that causes the UNIQUE CONSTRAINT error. I tried to wrap both calls in a transaction, but the same problem happens. Does anyone know the reason?
Edit
Everything else stays the same; I only added Evict as suggested:
public void AddFile<TFileType>(TFileType FileToAdd) where TFileType : File
{
    try
    {
        _session.Save(FileToAdd);
        _session.Flush();
    }
    catch (Exception e)
    {
        _session.Evict(FileToAdd);
        if (e.InnerException != null && e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
            throw new ArgumentException("Unique Name must be unique");
        else
            throw; // rethrow without resetting the stack trace
    }
}
No difference to the result.
Call _session.Evict(FileToAdd) in the catch block. Although the save fails, FileToAdd is still a transient object in the session and NH will attempt to persist (insert) it the next time the session is flushed.
NHibernate Manual "Best practices" Chapter 22:
This is more of a necessary practice than a "best" practice. When
an exception occurs, roll back the ITransaction and close the ISession.
If you don't, NHibernate can't guarantee that in-memory state
accurately represents persistent state. As a special case of this,
do not use ISession.Load() to determine if an instance with the given
identifier exists on the database; use Get() or a query instead.
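Following the manual's advice, a safer shape for the repository method wraps the save in an ITransaction, rolls back on failure, and discards the session afterwards. This is only a sketch under assumptions not in the original post: it presumes the repository can open a fresh session per unit of work from an ISessionFactory field, here called _sessionFactory.
public void AddFile<TFileType>(TFileType fileToAdd) where TFileType : File
{
    // A fresh session per unit of work avoids reusing a session whose
    // in-memory state no longer matches the database after a failure
    using (ISession session = _sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        try
        {
            session.Save(fileToAdd);
            tx.Commit(); // flushes the session and makes the insert permanent
        }
        catch (Exception e)
        {
            if (tx.IsActive)
                tx.Rollback(); // roll back so nothing half-done is left behind
            if (e.InnerException != null && e.InnerException.Message.Contains("Violation of UNIQUE KEY"))
                throw new ArgumentException("Unique Name must be unique");
            throw;
        }
    }
}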
I have the following snippet in my Lucene code:
try
{
    searcher.GetIndexReader();
}
catch (Exception ex)
{
    throw ex;
}
finally
{
    if (searcher != null)
    {
        searcher.Close();
    }
}
In my finally clause, when I execute searcher.Close(), will it also call searcher.GetIndexReader().Close() behind the scenes?
Or do I need to call searcher.GetIndexReader().Close() explicitly to close the IndexReader?
Thanks for reading.
Sorry, it's tough to tell from your snippet what the type of searcher is and how it was constructed. But you should not close the index reader with searcher.GetIndexReader().Close(). searcher.Close() closes the index reader along with all other resources associated with the searcher, UNLESS the searcher is an IndexSearcher instance constructed with IndexSearcher(IndexReader r); in that case you have to close the index reader manually.
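To make the ownership rule concrete, here is a sketch using the old Lucene(.NET) 2.x-style API that your snippet's naming suggests; the index path is made up:
// Case 1: the searcher opened the reader itself,
// so Close() releases the reader as well
IndexSearcher ownsReader = new IndexSearcher("/my/index/dir");
ownsReader.Close(); // also closes the underlying IndexReader

// Case 2: you handed the searcher an existing reader,
// so you must close that reader yourself
IndexReader reader = IndexReader.Open("/my/index/dir");
IndexSearcher sharesReader = new IndexSearcher(reader);
sharesReader.Close();  // does NOT close the reader you passed in
reader.Close();        // close it explicitly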
First of all, code like this is always a bad idea:
try {
    // Do something
} catch (Exception ex) {
    throw ex; // WRONG
}
You're just masking the source of the exception, and still throwing. Better just to delete those lines.
If you didn't create the IndexReader yourself, you don't need to close it yourself. Chances are high that you don't need to use the method getIndexReader at all.
Also, unless you're assigning to searcher within the try block, there's no need to check if it's null, since there's no way it could get a null value.
Here's an example of what your code should look like:
String myIndexDir = "/my/index/dir";
IndexSearcher searcher = new IndexSearcher(myIndexDir);
try {
    // Do something with the searcher
} finally {
    searcher.close();
}
I have this code running in parallel in two separate threads. It works fine a few times, but at some random point it throws an InvalidOperationException:
The transaction is either not associated with the current connection or has been completed.
At the point of the exception, I inspect the transaction in Visual Studio and verify that its connection is set normally. Also, command.Transaction._internalTransaction._transactionState is set to Active, and the IsZombied property is set to false.
This is a test application, and I am using Thread.Sleep to create longer transactions and cause overlaps.
Why might the exception be thrown, and what can I do about it?
IDbCommand command = new SqlCommand("Select * From INFO");
IDbConnection connection = new SqlConnection(connectionString);
command.Connection = connection;
IDbTransaction transaction = null;
try
{
    connection.Open();
    transaction = connection.BeginTransaction();
    command.Transaction = transaction;
    command.ExecuteNonQuery(); // Sometimes throws exception
    Thread.Sleep(forawhile); // For overlapping transactions running in parallel
    transaction.Commit();
}
catch (ApplicationException exception)
{
    if (transaction != null)
    {
        transaction.Rollback();
    }
}
finally
{
    connection.Close();
}
Found the solution. It turns out that calling this:
command.Connection = connection;
does not mean the connection is actually set on the command. Just after calling it, I checked the results of
command.Connection.GetHashCode();
command.GetHashCode();
and they were not equal. I refactored the code to use connection.CreateCommand, and the problem was solved.
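For reference, a minimal sketch of the refactored shape (illustrative, not the poster's actual code): the command is created from the already-open connection, so command, connection, and transaction are guaranteed to belong together, and each thread gets its own set of all three.
using System.Data.SqlClient;

static void RunInTransaction(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (SqlTransaction transaction = connection.BeginTransaction())
        using (SqlCommand command = connection.CreateCommand())
        {
            // The command was created by this connection;
            // enlist it in the transaction explicitly
            command.Transaction = transaction;
            command.CommandText = "Select * From INFO";
            command.ExecuteNonQuery();
            transaction.Commit();
        }
    }
}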