dataSource injection in a Grails service - sql

I have a service with application scope that is not transactional.
I have a service method which:
uses the injected dataSource to create a stored procedure call (using Sql.call{...}), executes it, and traverses the result set.
Based on the result set, I subdivide it into equally sized chunks and process them in multiple threads.
Each thread tries to do Sql sql = new Sql(dataSource)
At this point a deadlock occurs.
Why is that? Shouldn't the dataSource return a new or idle connection?

Try looking into GPars: it's a Groovy parallelization framework.

I ran into exactly the same issue. After hours of searching I found the solution.
In your DataSource.groovy config file you can set the parameters for the connection pooling to the database.
I changed the minIdle, maxIdle and maxActive settings of the http://commons.apache.org/proper/commons-dbcp/apidocs/org/apache/commons/dbcp/BasicDataSource.html so that my config file looks something like this:
dataSource {
    url = "jdbc:mysql://127.0.0.1/sipsy_dev?autoReconnect=true&zeroDateTimeBehavior=convertToNull"
    driverClassName = "com.mysql.jdbc.Driver"
    username = "sipsy_dev"
    password = "sipsy_dev"
    pooled = true
    properties {
        minEvictableIdleTimeMillis = 1800000
        timeBetweenEvictionRunsMillis = 1800000
        numTestsPerEvictionRun = 3
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        minIdle = 100
        maxIdle = 250
        maxActive = 500
        validationQuery = "SELECT 1"
    }
    dialect = 'org.hibernate.dialect.MySQL5InnoDBDialect'
}

When you are not in a transaction, you have to release the connection that GroovySQL picks up from the dataSource (for example by calling sql.close() once you are done with it). Otherwise the pool runs out of connections, and that's why it locks up.
Inside a transaction TransactionAwareDataSourceProxy will take care of sharing the connection and therefore releasing the connection from GroovySQL isn't required in that case. See http://jira.grails.org/browse/GRAILS-5454 for details.
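To make the underlying rule concrete, here is a minimal plain-JDBC sketch (in Java; the query, table and method names are illustrative, not from the question) of the borrow/use/return pattern that keeps a pool from running dry. Calling close() on a GroovySQL Sql instance achieves the same thing:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

void processChunk(DataSource dataSource, int chunkId) throws Exception {
    // try-with-resources returns the borrowed connection to the pool when the
    // block exits, even on exception - a leaked connection is never returned
    try (Connection con = dataSource.getConnection();
         PreparedStatement st = con.prepareStatement("SELECT * FROM work_items WHERE chunk = ?")) {
        st.setInt(1, chunkId);
        try (ResultSet rs = st.executeQuery()) {
            while (rs.next()) {
                // process the row here
            }
        }
    } // the connection goes back to the pool here, so other threads can borrow it
}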
This is a better way to use GroovySQL in Grails since the OpenSessionInView (OSIV) interceptor will take care of closing the connection and it will share the same database connection as Hibernate. This method works in both cases: inside transactions and outside transactions.
Sql sql = new Sql(sessionFactory.currentSession.connection())

Related

Do we need to close the DBCPConnectionPool connection in a custom processor, or is it handled by the controller service itself?

I have created a custom processor which saves some records to a MySQL database. To connect to MySQL I am using a DBCPConnectionPool object in my custom processor, and it saves data to the database tables correctly. But I am worried about the pooling mechanism: I am not closing the connection after my save logic completes. This works for 2 or 3 flow files, but will it still work correctly when I send many flow files?
DBCPService dbcpService = context.getProperty(DBCP_SERVICE).asControllerService(DBCPService.class);
Connection con = dbcpService.getConnection();
I am looking for clarification, as my current flow works correctly with a small number of flow files.
You should be returning it to the pool, most likely with try-with-resources:
try (final Connection con = dbcpService.getConnection();
     final PreparedStatement st = con.prepareStatement(selectQuery)) {
    // use the statement here; closing the pooled connection at the end of the
    // block returns it to the pool rather than physically closing it
}
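For instance, a save along the lines described in the question might look like this (a sketch only; the table, columns and variables are made up for illustration, not taken from your processor):
import java.sql.Connection;
import java.sql.PreparedStatement;

final DBCPService dbcpService = context.getProperty(DBCP_SERVICE)
        .asControllerService(DBCPService.class);
try (final Connection con = dbcpService.getConnection();
     final PreparedStatement st = con.prepareStatement(
             "INSERT INTO records (id, payload) VALUES (?, ?)")) {
    st.setLong(1, recordId);     // hypothetical values taken from the flow file
    st.setString(2, payload);
    st.executeUpdate();
} // the pooled connection is returned here, so a large number of flow files is fine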
You can always consult the standard processors to see what they do:
https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java#L223

GemFire getRegion() returns null whereas OQL query gives result

I am using Pivotal GemFire 9.0.0 with 1 Locator and 1 Server. The Server has a Region called "submissions", configured like this:
<gfe:replicated-region id="submissionsRegion" name="submissions"
statistics="true" template="replicateRegionTemplate">
...
</gfe:replicated-region>
I am getting a null Region when executing the following code:
Region<K, V> region = clientCache.getRegion("submissions");
Surprisingly, the same ClientCache returns all the records when I query using OQL and the QueryService, as shown below:
String queryString = "SELECT * FROM /submissions";
QueryService queryService = clientCache.getQueryService();
Query query = queryService.newQuery(queryString);
SelectResults results = (SelectResults) query.execute();
I am initializing my ClientCache like this:
ClientCache clientCache = new ClientCacheFactory()
.addPoolLocator("localhost", 10479)
.set("name", "MyClientCache")
.set("log-level", "error")
.create();
I am really baffled by this. Any pointer or help would be great.
You need to configure your ClientCache with the regions as well (either through cache.xml or the pure GemFire API). Using your example:
ClientRegionFactory regionFactory = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY);
Region region = regionFactory.create("submissions");
ClientRegionShortcut.PROXY is used here just for the sake of simplicity; you should use the shortcut that meets your needs.
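As a quick sanity check once the client Region is defined (a hypothetical usage; the key is made up):
// with PROXY, the region holds no local state, so this get goes to the server
Object submission = region.get("some-key");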
The OQL query works as expected because you are obtaining the QueryService through the ClientCache.getQueryService() method (instead of ClientCache.getLocalQueryService()), so the query is actually executed on the server side.
You can get more information about how to configure the client/server topology in Client/Server Configuration.
Hope this helps.
Cheers.
Yes, you need to "define" a corresponding client-side Region matching the server-side REPLICATE Region by name (i.e. "submissions"). This is actually a requirement independent of the server Region's DataPolicy type (e.g. REPLICATE or PARTITION).
This is necessary since not every client wants to know about, or even needs to have, data/events from every possible server Region. Of course, this is also configurable through subscriptions and "interest registration" (with Client/Server Event Messaging, or alternatively, CQs).
Anyway, you can completely avoid using the GemFire API directly, or even GemFire's native cache.xml (which I highly recommend avoiding), by using either SDG's XML namespace...
<gfe:client-cache properties-ref="gemfireProperties" ... />
<gfe:client-region id="submissions" shortcut="PROXY"/>
Or by using Spring JavaConfig with SDG's API...
@Configuration
class GemFireConfiguration {

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("log-level", "config");
        ...
        return gemfireProperties;
    }

    @Bean
    ClientCacheFactoryBean gemfireCache() {
        ClientCacheFactoryBean gemfireCache = new ClientCacheFactoryBean();
        gemfireCache.setClose(true);
        gemfireCache.setProperties(gemfireProperties());
        ...
        return gemfireCache;
    }

    @Bean(name = "submissions")
    ClientRegionFactoryBean submissionsRegion(GemFireCache gemfireCache) {
        ClientRegionFactoryBean submissions = new ClientRegionFactoryBean();
        submissions.setCache(gemfireCache);
        submissions.setClose(false);
        submissions.setShortcut(ClientRegionShortcut.PROXY);
        ...
        return submissions;
    }

    ...
}
The "submissions" Region can be wrapped with SDG's GemfireTemplate, which will handle getting the "correct" QueryService on your behalf when running queries using the find(..) method.
You may also be interested in making your client "submissions" Region a CACHING_PROXY. In that case you will need to register "interest" in the keys or data of interest. CQs (continuous queries) are the best way to do this, since they use query criteria to define the data of "interest".
CACHING_PROXY is exactly what it sounds like: it caches data locally in the client based on the interest policies. This also gives you the ability to use the "local" QueryService to query data locally, avoiding the network hop.
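For example, a minimal sketch of a CACHING_PROXY client Region with interest registration, using the plain GemFire API (note the pool must have subscriptions enabled to receive server-side events):
ClientCache clientCache = new ClientCacheFactory()
    .addPoolLocator("localhost", 10479)
    .setPoolSubscriptionEnabled(true) // required for server-to-client event delivery
    .create();

Region<String, Object> submissions = clientCache
    .<String, Object>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
    .create("submissions");

// cache all entries locally as they change on the server
submissions.registerInterest("ALL_KEYS");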
Anyway, many options here.
Cheers,
John

this command is not available unless the connection is created with admin-commands enabled

When trying to run the following in Redis using BookSleeve:
using (var conn = new RedisConnection(server, port, -1, password))
{
    var result = conn.Server.FlushDb(0);
    result.Wait();
}
I get an error saying:
"This command is not available unless the connection is created with admin-commands enabled"
I am not sure how to execute commands as admin. Do I need to create an account in the database with admin access and log in with that?
Updated answer for StackExchange.Redis:
var conn = ConnectionMultiplexer.Connect("localhost,allowAdmin=true");
Note also that the object created here should be created once per application and shared as a global singleton, per Marc:
Because the ConnectionMultiplexer does a lot, it is designed to be
shared and reused between callers. You should not create a
ConnectionMultiplexer per operation. It is fully thread-safe and ready
for this usage.
Basically, the dangerous commands that you don't need in routine operations, but which can cause lots of problems if used inappropriately (i.e. the equivalent of DROP DATABASE in T-SQL, since your example is FlushDb), are protected by a "yes, I meant to do that..." flag:
using (var conn = new RedisConnection(server, port, -1, password,
                                      allowAdmin: true)) <==== here
I will improve the error message to make this very clear and explicit.
You can also set this in C# when you're creating your multiplexer - set AllowAdmin = true
private ConnectionMultiplexer GetConnectionMultiplexer()
{
    var options = ConfigurationOptions.Parse("localhost:6379");
    options.ConnectRetry = 5;
    options.AllowAdmin = true;
    return ConnectionMultiplexer.Connect(options);
}
For those who, like me, faced the error:
StackExchange.Redis.RedisCommandException: This operation is not available unless admin mode is enabled: ROLE
after upgrading StackExchange.Redis to version 2.2.4 with a Sentinel connection: it's a known bug. The workaround was either to downgrade the client or to add allowAdmin=true to the connection string and wait for the fix.
Starting from the 2.2.50 public release, the issue is fixed.

SimpleRoleProvider causing remote transaction inside TransactionScope

I am in the process of upgrading ASP.NET Membership to the new SimpleMembership provider in MVC 4. This is an Azure/SQL Azure app which runs fine on localhost but fails when deployed. I have code in a transaction as follows:
TransactionOptions toptions = new TransactionOptions();
toptions.IsolationLevel = System.Transactions.IsolationLevel.Serializable;
using (TransactionScope trans = new TransactionScope(TransactionScopeOption.Required, toptions))
{
    // ... do a bunch of database stuff in a single dbContext ...
    var roleprov = (SimpleRoleProvider)Roles.Provider;
    string[] roles = roleprov.GetRolesForUser(Username);
    // the line above fails with: The transaction manager has disabled its support
    // for remote/network transactions. (Exception from HRESULT: 0x8004D024)
}
I am using this technique to populate the Roles classes. The stack trace seems to indicate that it is indeed trying to fire off a sub-transaction to complete that call. The SimpleMembership tables are in a different database. How can I retrieve role info from the role provider inside the context of a separate transaction?
The problem is that GetRolesForUser opens a new connection to a second database, which in turn detects that it is inside a TransactionScope. This (see MSDN - System.Transactions Integration with SQL Server) promotes the transaction to the DTC. You could try a few options:
Get roles before the transaction starts
You could retrieve string[] roles outside your TransactionScope. Is there a reason you need to get them inside the scope? Given that you say:
How can I retrieve role info from the role provider inside the context of a separate transaction
it sounds like you could get the role info before the TransactionScope and have no problems.
Turn off transactions on the simple membership connection string
You can tell a connection not to take part in ambient transactions by putting "Enlist=false" in its connection string (see SqlConnection.ConnectionString), so this might be an option for you if you never need transactions on the database you use for SimpleMembership.
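For example, the SimpleMembership connection string could look like this (the server and database names are placeholders; the relevant part is Enlist=false):
"Data Source=myserver;Initial Catalog=membership;Integrated Security=True;Enlist=false"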
Try opening the Simple Membership connection before the transaction
For SimpleRoleProvider, it creates its database object and then opens it the first time it is used. But it doesn't close it until .... scratch that: the connection is opened on each call to GetRolesForUser, so you are out of luck. I was thinking you could call GetRolesForUser once before the TransactionScope is opened, and then again inside the scope using the already-open connection - you can't.
Play with the IObjectContextAdapter
Disclaimer: I can't promise this will work as I can't test with your setup.
You can play tricks to prevent promotion when there are two connection strings by opening the non-transactional connection outside the transaction scope first; the transaction then shouldn't be promoted. The same trick applies if the same connection is closed and reopened inside one transaction scope (which would otherwise cause promotion).
You could try this with your context, and see if that stopped the GetRolesForUser promoting the transaction, but I doubt that would work as GetRolesForUser causes the connection to open if it isn't already. As I can't test in your scenario, I will include it in case it helps.
using (var db = new ExampleContext())
{
    var adapter = db as System.Data.Entity.Infrastructure.IObjectContextAdapter;
    using (var conn = adapter.ObjectContext.Connection)
    {
        conn.Open();
        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
        {
            // perform operations
            db.SaveChanges();
            // perform more operations
            db.SaveChanges();
            // perform even more operations
            db.SaveChanges();
            // If you don't complete, the transaction won't commit and you will lose the changes
            scope.Complete();
        }
    }
}

SQL Azure - Transient "ExecuteReader requires an open connection" exception

I'm using SQL Azure in a Windows Azure app running as a cloud service. Most of the time my database actions work completely fine (that is, after handling all sorts of timeouts and whatnot); however, I'm running into a problem that seems to occur at random:
using (var connection = new SqlConnection(m_connectionString))
{
    m_ConnectionRetryPolicy.ExecuteAction(() => connection.Open());
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM X WHERE Y = Z";
        var reader = m_CommandRetryPolicy.ExecuteAction(() => command.ExecuteReader());
        return LoadData(reader).FirstOrDefault();
    }
}
The line that fails is the command.ExecuteReader, with:
ExecuteReader requires an open and available Connection. The connection's current state is closed
Things that I have already considered:
I'm not "reusing" an old connection or saving a connection as a member variable.
There should be no concurrency issues - the repository class that these methods belong to is created each time it is needed.
Has anyone else experienced this? I could of course just add this to the list of exceptions that trigger a retry, but I'm not very comfortable with that as I don't understand the underlying cause.
I had a bunch of these errors a few days ago (West Europe) on my production deployment, but they went away by themselves. At the same time I was seeing timeouts, throttling and other errors from SQL Azure. I assume that there was a temporary problem with the platform (or at least the server that I am running on).
You probably aren't doing anything wrong in your code, but are suffering from degraded performance on SQL Azure. Try to handle the errors: perform retries, use exponential back-off, use queues (to reduce concurrency), split your load across databases, that sort of thing.
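The retry-with-exponential-back-off idea is language-agnostic; a minimal sketch (written in Java here, with illustrative names and limits, not your retry policy's actual API) might look like:
import java.sql.SQLTransientException;
import java.util.concurrent.Callable;

static <T> T withRetries(Callable<T> action, int maxAttempts) throws Exception {
    long delayMs = 100; // initial back-off
    for (int attempt = 1; ; attempt++) {
        try {
            return action.call();
        } catch (SQLTransientException e) { // only retry errors known to be transient
            if (attempt >= maxAttempts) {
                throw e; // give up after the configured number of attempts
            }
            Thread.sleep(delayMs);
            delayMs *= 2; // double the wait before the next attempt
        }
    }
}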
Write everything within a try/catch/finally block, as follows:
try
{
    m_ConnectionRetryPolicy.ExecuteAction(() => connection.Open());
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "SELECT * FROM X WHERE Y = Z";
        var reader = m_CommandRetryPolicy.ExecuteAction(() => command.ExecuteReader());
        return LoadData(reader).FirstOrDefault();
    }
}
catch (Exception ex)
{
    // handle or log the exception here; don't swallow it silently
    throw;
}
finally
{
    connection.Close();
}
Remember to close the connection in the finally block as well.
There is an Enterprise Library that MS has produced specifically for SQL Azure; here are some examples from their Patterns & Practices group. It's similar to what you are doing, but it does more for reliability (and these examples show how to get a reliable connection):
http://msdn.microsoft.com/en-us/library/hh680899(v=pandp.50).aspx
Are you sure it's the reader that's failing and not the opening of the connection? I'm encountering an exception when I wrap connection.Open() in m_ConnectionRetryPolicy.ExecuteAction().
However, it works just fine for me if I skip the ExecuteAction wrapper and open the connection using connection.OpenWithRetry(m_ConnectionRetryPolicy).
I'm also using command.ExecuteReaderWithRetry(m_ConnectionRetryPolicy), which is working for me.
I have no idea why it doesn't work when wrapped in ExecuteAction, though.
I believe this means that Azure has closed the connection behind the scenes, without telling the connection pooler. This is by design. So, the connection pooler gives you what it thinks is an available, open connection, but when you try to use it, it finds out it's not open after all.
This seems very clunky to me, but it's the way Azure is at the moment.