How to specify alternate datasource when doing raw SQL queries in Grails 2.3.x? - sql

We have a read-only reporting database clone set up as an alternate datasource named 'reporting' in our Grails application. This works great when using dynamic finders or criteria, as in MyDomain.reporting.findByXXXX(..etc..).
However there are some nasty queries that have to be done in raw SQL. Our current way of doing this (in a service) is
def sessionFactory

public static List getSomeBigNastyData(...)
{
    sessionFactory.currentSession.createSQLQuery(
        """
        Big Ugly Query
        """
    ).list()
}
But this does not go to the reporting database and there doesn't seem to be a way of specifying 'reporting' - is there a way to specify the datasource to execute raw SQL against?

It's possible to use the dataSource as an injected bean and groovy.sql.Sql to run your queries. Below is a simple example of a service that will use your data source and allow you to run a query against it.
package com.example

import groovy.sql.GroovyRowResult
import groovy.sql.Sql

class ExampleSqlService {

    def dataSource_reporting // your named data source

    List<GroovyRowResult> query(String sql) {
        def db = new Sql(dataSource_reporting)
        return db.rows(sql)
    }
}
Using a service (like the above example) allows you to access it from basically anywhere (Controller, Service, TagLib, Domain, etc.)
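For completeness, here is a rough sketch of calling that service from a controller; ReportingController and the table name are made up for illustration, only ExampleSqlService comes from the example above.
package com.example

import grails.converters.JSON

class ReportingController {

    def exampleSqlService // Grails injects the service bean by name

    def bigNastyReport() {
        // the SQL string here is just a placeholder for your big ugly query
        def rows = exampleSqlService.query('select * from some_reporting_table')
        render rows as JSON
    }
}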

Related

Multiple mongoDB related to same django rest framework project

We have one Django REST framework (DRF) project which should have multiple databases (MongoDB). Each database should be independent. We are able to connect to one database, but when we go to another DB for writing, the connection happens but the data is stored in the DB that was connected first.
We changed the default DB and everything, but nothing changed.
(Note: the solution should work with serializers, because we need to use DynamicDocumentSerializer in DRF-mongoengine.)
Thanks in advance.
While running connect() just assign an alias for each of your databases and then for each Document specify a db_alias parameter in meta that points to a specific database alias:
settings.py:
from mongoengine import connect

connect(
    alias='user-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)

connect(
    alias='book-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
models.py:
from mongoengine import Document, StringField

class User(Document):
    name = StringField()
    meta = {'db_alias': 'user-db'}

class Book(Document):
    name = StringField()
    meta = {'db_alias': 'book-db'}
I guess, I finally get what you need.
What you could do is write a really simple middleware that maps your url schema to the database:
from mongoengine import Document

class DBSwitchMiddleware:
    """
    This middleware is supposed to switch the database depending on the request URL.
    """
    def __init__(self, get_response):
        self.get_response = get_response
        # collect all the mongoengine Documents in your project
        import models
        self.documents = [item for item in vars(models).values()
                          if isinstance(item, type) and issubclass(item, Document) and item is not Document]

    def __call__(self, request):
        # depending on the URL, switch documents to the appropriate database
        if request.path.startswith('/main/project1'):
            for document in self.documents:
                document._meta['db_alias'] = 'db1'
        elif request.path.startswith('/main/project2'):
            for document in self.documents:
                document._meta['db_alias'] = 'db2'
        # delegate handling the rest of the response to your views
        response = self.get_response(request)
        return response
Note that this solution might be prone to race conditions. We're modifying the Document classes globally here, so if one request is started and then, in the middle of its execution, a second request is handled by the same Python interpreter, it will overwrite the document._meta['db_alias'] setting and the first request will start writing to the same database as the second one, which will break your data horribly.
The same Python interpreter is used by both request handlers if you're using multithreading, so with this solution you can't start your server with multiple threads, only with multiple processes.
To address the threading issues, you can use threading.local(). If you prefer a context manager approach, there's also the contextvars module.

GemFire getRegion() returns null whereas OQL query gives result

I am using Pivotal GemFire 9.0.0 with 1 Locator and 1 Server. The Server has a Region called "submissions", like below -
<gfe:replicated-region id="submissionsRegion" name="submissions"
statistics="true" template="replicateRegionTemplate">
...
</gfe:replicated-region>
I am getting Region as null when executing the following code -
Region<K, V> region = clientCache.getRegion("submissions");
Surprisingly, the same ClientCache returns all the records when I query using OQL and QueryService as shown below -
String queryString = "SELECT * FROM /submissions";
QueryService queryService = clientCache.getQueryService();
Query query = queryService.newQuery(queryString);
SelectResults results = (SelectResults) query.execute();
I am initializing my ClientCache like this -
ClientCache clientCache = new ClientCacheFactory()
.addPoolLocator("localhost", 10479)
.set("name", "MyClientCache")
.set("log-level", "error")
.create();
I am really baffled by this. Any pointer or help would be great.
You need to configure your ClientCache (either through a cache.xml or pure GemFire API) with the regions as well. Using your example:
ClientRegionFactory regionFactory = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY);
Region region = regionFactory.create("submissions");
The ClientRegionShortcut.PROXY is used here just for the sake of simplicity; you should use the shortcut that best meets your needs.
The OQL works as expected because you are obtaining the QueryService through the ClientCache.getQueryService() method (instead of ClientCache.getLocalQueryService()), so the query is actually executed on Server Side.
You can get more information about how to configure the Client/Server topology in
Client/Server Configuration.
Hope this helps.
Cheers.
Yes, you need to "define" the corresponding client-side Region, matching the server-side REPLICATE Region by name (i.e. "submissions"). This is actually a requirement independent of the server Region's DataPolicy type (e.g. REPLICATE or PARTITION).
This is necessary since not every client wants to know about, or even needs to have, data/events from every possible server Region. Of course, this is also configurable through subscriptions and "Interest Registration" (with Client/Server Event Messaging, or alternatively, CQs).
Anyway, you can completely avoid the use of the GemFire API directly or even GemFire's native cache.xml (highly recommend avoiding) by using either SDG's XML namespace...
<gfe:client-cache properties-ref="gemfireProperties" ... />
<gfe:client-region id="submissions" shortcut="PROXY"/>
Or by using Spring JavaConfig with SDG's API...
@Configuration
class GemFireConfiguration {

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("log-level", "config");
        ...
        return gemfireProperties;
    }

    @Bean
    ClientCacheFactoryBean gemfireCache() {
        ClientCacheFactoryBean gemfireCache = new ClientCacheFactoryBean();
        gemfireCache.setClose(true);
        gemfireCache.setProperties(gemfireProperties());
        ...
        return gemfireCache;
    }

    @Bean(name = "submissions")
    ClientRegionFactoryBean submissionsRegion(GemFireCache gemfireCache) {
        ClientRegionFactoryBean submissions = new ClientRegionFactoryBean();
        submissions.setCache(gemfireCache);
        submissions.setClose(false);
        submissions.setShortcut(ClientRegionShortcut.PROXY);
        ...
        return submissions;
    }

    ...
}
The "submissions" Region can be wrapped with SDG's GemfireTemplate, which will handle getting the "correct" QueryService on your behalf when running queries using the find(..) method.
Of course, you may also be interested in making your client "submissions" Region a CACHING_PROXY. If so, you will then need to register "interest" in the keys or data of interest; CQs are the best way to do this, as they use query criteria to define the data of "interest".
CACHING_PROXY is exactly what it sounds like, caching data locally in the client based on the interest policies. This also gives you the ability to use the "local" QueryService to query data locally, avoiding the network hop.
Anyway, many options here.
Cheers,
John

Grails - Recreate database schema for integration test

Is there a convenient way to force Grails / Hibernate to recreate the database schema from an integration test?
If you add the following in DataSource.groovy an empty database will be created before the integration tests are run:
environments {
    test {
        dataSource {
            dbCreate = "create"
        }
    }
}
By default each integration test executes within a transaction that is rolled back at the end of the test, so unless you've disabled this default behaviour there shouldn't be any need to programmatically recreate the database.
Update
Based on your comment, it seems you really do want to recreate the schema before some integration tests. In that case, the only way I can think of is to:
drop and recreate the schema
use grails schema-export to import a fresh schema
import groovy.sql.Sql
import org.hibernate.SessionFactory
import org.hibernate.jdbc.Work
import org.junit.Test

import java.sql.Connection
import java.sql.SQLException

class MyIntegrationTest {

    SessionFactory sessionFactory

    /**
     * Helper for executing SQL statements
     * @param jdbcWork A closure that is passed an <tt>Sql</tt> object that is used to execute the JDBC statements
     */
    private doJdbcWork(Closure jdbcWork) {
        sessionFactory.currentSession.doWork(
            new Work() {
                @Override
                void execute(Connection connection) throws SQLException {
                    // do not close this Sql instance ourselves
                    Sql sql = new Sql(connection)
                    jdbcWork(sql)
                }
            }
        )
    }

    private recreateSchema() {
        doJdbcWork { Sql sql ->
            // use the sql object to drop the database and create a new blank database
            // something like the following might work for MySQL
            sql.execute("drop database my-schema")
            sql.execute("create database my-schema")
        }
        // generate the DDL and import it
        // there must be a better way to execute a grails command from within an
        // integration test, but unfortunately I don't know what it is
        'grails test schema-export export'.execute()
    }

    @Test
    void myTestMethod() {
        recreateSchema()
        // now do the test
    }
}
First and foremost, this code is completely untested, so treat it with deep suspicion and low expectations. Secondly, you may need to change the default transactional behaviour of integration tests (with @Transactional) in order for this to work.
This seems to work fine, but it's obviously very tightly coupled to H2 so it would have been nice if the Hibernate plugin had exposed an api to take care of this.
http://h2database.com/html/grammar.html#script
import groovy.sql.Sql
import org.hibernate.SessionFactory

import javax.sql.DataSource

class SomethingTestingTransactionsSpec extends IntegrationSpec {
    static transactional = false // Why I need this

    SessionFactory sessionFactory // Injected by Spring
    DataSource dataSource // Also injected

    File schemaDump
    Sql sql

    void setup() {
        sql = new Sql(dataSource)
        schemaDump = File.createTempFile("test-database-dump", ".sql") // Java 7 API
        sql.execute("script drop to ${schemaDump.absolutePath}")
    }

    void cleanup() {
        sql.execute("runscript from ${schemaDump.absolutePath}")
        sessionFactory.currentSession.clear()
        schemaDump.delete()
    }

    // Spock tests ...
}
It should be trivial to extract this code into a bean registered only for test environments. That should clean up the test code a bit and improve efficiency by only having to dump the schema once.
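For example, a rough sketch of registering such a bean only for the test environment in grails-app/conf/spring/resources.groovy; SchemaDumper is a made-up name for a helper class that would hold the setup/cleanup logic above:
import grails.util.Environment

beans = {
    if (Environment.current == Environment.TEST) {
        // hypothetical helper bean wrapping the H2 SCRIPT/RUNSCRIPT calls shown above
        schemaDumper(SchemaDumper) {
            dataSource = ref('dataSource')
            sessionFactory = ref('sessionFactory')
        }
    }
}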
Well, you have access to executing arbitrary sql via sessionFactory, so you could call a grails schema export at the beginning of your tests and then just re-import the schema into your DB when needed.
Alternatively, I wonder if calling database migration plugin externally will accomplish the same.
Or you can trick Grails into thinking your domain class has changed and force a reload via https://github.com/grails/grails-core/blob/v2.1.1/grails-hibernate/src/main/groovy/org/codehaus/groovy/grails/plugins/orm/hibernate/HibernatePluginSupport.groovy#L340 (don't ask me how).

If I use groovy sql class in grails, does it use the grails connection pooling?

From the examples below in the sql documentation. If I use either of these ways to create a sql instance in the middle of a grails service class, will it use the grails connection pooling? Will it participate in any transaction capabilities? Do I need to close the connection myself? Or will it automatically go back into the pool?
def db = [url:'jdbc:hsqldb:mem:testDB', user:'sa', password:'', driver:'org.hsqldb.jdbcDriver']
def sql = Sql.newInstance(db.url, db.user, db.password, db.driver)
or if you have an existing connection (perhaps from a connection pool) or a datasource use one of the constructors:
def sql = new Sql(datasource)
Now you can invoke sql, e.g. to create a table:
sql.execute '''
    create table PROJECT (
        id integer not null,
        name varchar(50),
        url varchar(100)
    )
'''
If you execute:
Sql.newInstance(...)
You will create a new connection and you aren't using the Connection Pool.
If you want to use the connection pool, you can create a Service with the following command:
grails create-service org.foo.MyService
Then, in your MyService.groovy file, you can manage transactions as follows:
import groovy.sql.Sql

class MyService {
    def dataSource // inject the datasource
    static transactional = true // tell Grails that the service methods will be transactional

    def doSomething() {
        def sql = new Sql(dataSource)
        // rest of your code
    }
}
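On the question's "do I need to close the connection myself?": my understanding is that when Sql is constructed from a DataSource it borrows a connection from the pool per statement and returns it afterwards, but it doesn't hurt to close the Sql instance explicitly when you're done; for example (the table name is just an illustration):
def doSomething() {
    def sql = new Sql(dataSource)
    try {
        sql.eachRow('select id, name from PROJECT') { row ->
            // process each row here
            log.debug "project ${row.id}: ${row.name}"
        }
    } finally {
        sql.close() // frees any cached connection back to the pool
    }
}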
For more details you can read: http://grails.org/doc/2.0.x/guide/services.html
EDIT:
For manage multiple datasources you can do one of the following based on your Grails version.
If you are using a Grails version greater than 1.1.1 (not 2.x) you can use the following plugin:
http://grails.org/plugin/datasources
If you are using Grails 2.x you can use the out of the box support:
http://grails.org/doc/2.0.0.RC1/guide/conf.html#multipleDatasources
If you create the Sql object like this I believe it will use connection pooling
class SomeService {
    SessionFactory sessionFactory

    def someMethod() {
        Sql sql = new Sql(sessionFactory.currentSession.connection())
    }
}

Grails - store sql that will be used by services

I am writing a Grails application that will mostly be using the springws web services plugin with endpoints backed by services. The services will retrieve data from a variety of back end databases (i.e., not via domain classes and GORM). I would like to store the sql that my services will be using to fetch the data for the web services in external files.
I'm looking for suggestions on:
Where is the best place to keep the files (i.e., I'd like to put them somewhere obvious like grails-app/sql) and best format (i.e., xml, configslurper, etc.)
Best way to abstract the retrieving of the sql text so my services that will execute the sql will not need to know where or how they are fetched. Services will just provide a sqlid and get the sql.
I was working on a project recently where I needed to do something similar. I created the following directory to store the sql files:
./grails-app/conf/sql
For example there is a file ./grails-app/conf/sql/hr/FIND_PERSON_BY_ID.sql that has something like the following:
select a.id
     , a.first_name
     , a.last_name
  from person a
 where a.id = ?
I created a SqlCatalogService class that would load all files in that directory (and subdirectories) and store the filenames (minus extension) and file text in a Map. The service has a get(id) method that returns the sql text that is cached in the Map. Since files/directories stored in grails-app/conf are placed in the classpath, the SqlCatalogService uses the following code to read in the files:
....
....
Map<String,String> sqlCache = [:]
....
....
void loadSqlCache() {
    try {
        loadSqlCacheFromDirectory(new File(this.class.getResource("/sql/").getFile()))
    } catch (Exception ex) {
        log.error(ex)
    }
}

void loadSqlCacheFromDirectory(File directory) {
    log.info "Loading SQL cache from disk using base directory ${directory.name}"
    synchronized(sqlCache) {
        if(sqlCache.size() == 0) {
            try {
                directory.eachFileRecurse { sqlFile ->
                    if(sqlFile.isFile() && sqlFile.name.toUpperCase().endsWith(".SQL")) {
                        def sqlKey = sqlFile.name.toUpperCase()[0..-5]
                        sqlCache[sqlKey] = sqlFile.text
                        log.debug "added SQL [${sqlKey}] to cache"
                    }
                }
            } catch (Exception ex) {
                log.error(ex)
            }
        } else {
            log.warn "request to load sql cache and cache not empty: size [${sqlCache.size()}]"
        }
    }
}

String get(String sqlId) {
    def sqlKey = sqlId?.toUpperCase()
    log.debug "SQL Id requested: ${sqlKey}"
    if(!sqlCache[sqlKey]) {
        log.debug "SQL [${sqlKey}] not found in cache, loading cache from disk"
        loadSqlCache()
    }
    return sqlCache[sqlKey]
}
Services that use various datasources use the SqlCatalogService to retrieve the sql by calling the get(id) method:
class PersonService {
    def hrDataSource
    def sqlCatalogService

    private static final String SQL_FIND_PERSON_BY_ID = "FIND_PERSON_BY_ID"

    Person findPersonById(String personId) {
        try {
            def sql = new groovy.sql.Sql(hrDataSource)
            def row = sql.firstRow(sqlCatalogService.get(SQL_FIND_PERSON_BY_ID), [personId])
            row ? new Person(row) : null
        } catch (Exception ex) {
            log.error ex.message, ex
            throw ex
        }
    }
}
For now we only have a few sql statements, so storing all the text in a Map is not an issue. If you have lots of sql files to store you may need to think about using something like Ehcache and defining an eviction strategy (i.e., least recently used or least frequently used), only keeping the most used statements in memory and leaving the rest on disk until needed.
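If a full caching library feels like overkill, one lightweight way to get LRU eviction (just an illustration, not part of the service above) is a bounded, access-ordered LinkedHashMap:
// minimal LRU cache sketch: keeps at most maxEntries, evicting the least recently used key
class LruSqlCache extends LinkedHashMap<String, String> {
    private final int maxEntries

    LruSqlCache(int maxEntries) {
        super(16, 0.75f, true) // accessOrder = true gives least-recently-used iteration order
        this.maxEntries = maxEntries
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        return size() > maxEntries
    }
}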
Before doing this I thought about using GORM and storing the sql text in the database, but decided that having the sql in files made it easier to develop with, since we could pretty much save the sql to a file directly from our sql tool (replacing hard-coded params with question marks) and let our revision control system track the changes. I'm not saying the above service is the most efficient or correct way to handle this, but it's worked so far for our needs.
Have you considered using Grails GORM and an HSQLDB database to store the SQL you want executed? You could then put in a record for each service containing that service's SQL and retrieve it using normal GORM functions. You could also generate a default set of controllers and views that would allow you to edit the SQL. If you want to store the SQL in external files instead, you can create a subdirectory in the web-app directory called sql, then store your SQL statements as text files. You could create a class that takes a service name, loads the associated text file containing the SQL and returns the contents of that file. Without knowing how complex your SQL will be, I can't say what the best format would be: if you're dealing with normal select statements with no parameter substitution, plain text is best; if you're dealing with more complex SQL with substitutions and multiple queries, you may want to use XML.
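If you go the GORM route suggested here, a minimal sketch might look like the following; ServiceSql and its fields are made-up names for illustration:
// hypothetical domain class: one row per service holding its SQL text
class ServiceSql {
    String serviceName
    String sqlText

    static constraints = {
        serviceName blank: false, unique: true
        sqlText blank: false, maxSize: 8000
    }
}

// a service could then look up its statement by name:
// def statementText = ServiceSql.findByServiceName('findPersonById')?.sqlText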