Spring Batch tables in a different schema

I want to use a different schema to save the Spring Batch tables. I can see that my new datasource is set in the JobRepositoryFactoryBean, but the tables are still being created in the other schema, where I have my business tables. I read somewhere that I can use dataSource.setValidationQuery to alter the schema, but that doesn't work either. How can I solve this? Below are the JobRepositoryFactoryBean and the datasource properties.
@Bean
@Qualifier("batchDataSource")
protected JobRepository createJobRepository() throws Exception {
    JobRepositoryFactoryBean factory = createJobRepositoryFactoryBean();
    factory.setDataSource(getDataSource());
    if (getDbType() != null) {
        factory.setDatabaseType(getDbType());
    }
    factory.setTransactionManager(getTransactionManager());
    factory.setIsolationLevelForCreate(getIsolationLevel());
    factory.setMaxVarCharLength(maxVarCharLength);
    factory.setTablePrefix(getTablePrefix());
    factory.setValidateTransactionState(validateTransactionState);
    factory.afterPropertiesSet();
    return factory.getObject();
}
spring.datasource.url=url
spring.datasource.username=username
spring.datasource.password=pwd
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.validation-query=ALTER SESSION SET CURRENT_SCHEMA=schemaname
#batch setting
spring.batch.datasource.url=burl
spring.batch.datasource.username=busername
spring.batch.datasource.password=bpwd
spring.batch.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.batch.datasource.validation-query=ALTER SESSION SET CURRENT_SCHEMA=batchschema
org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
dataSource.setName("batchDataSourceName");
dataSource.setDriverClassName(batchDataSourceProperties.getDriverClassName());
dataSource.setUrl(batchDataSourceProperties.getUrl());
dataSource.setUsername(batchDataSourceProperties.getUsername());
dataSource.setPassword(batchDataSourceProperties.getPassword());
// dataSource.setValidationQuery(batchDataSourceProperties.getValidationQuery());

The property below in application.properties is working for me. It will create the metadata tables under new_schema in your DB.
spring.batch.tablePrefix=new_schema.BATCH_
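If you build the JobRepositoryFactoryBean yourself, as the question does, the same prefix can be set programmatically. This is a minimal sketch, assuming a qualified batchDataSource bean and a transaction manager are already defined; NEW_SCHEMA is a placeholder for your schema name:
@Bean
public JobRepository jobRepository(@Qualifier("batchDataSource") DataSource dataSource,
                                   PlatformTransactionManager transactionManager) throws Exception {
    JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
    factory.setDataSource(dataSource);
    factory.setTransactionManager(transactionManager);
    // Qualifying the prefix with the schema points the metadata tables at it.
    factory.setTablePrefix("NEW_SCHEMA.BATCH_");
    factory.afterPropertiesSet();
    return factory.getObject();
}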
Below is the version of Spring Boot I am using.
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.3.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>

When using Spring Batch's @EnableBatchProcessing, the DataSource used for the Spring Batch tables is the one provided by the BatchConfigurer. If you are using more than one DataSource in your application, you must create your own BatchConfigurer (either by extending DefaultBatchConfigurer or implementing the interface) so that Spring Batch knows which one to use. You can read more about this customization in the reference documentation here: https://docs.spring.io/spring-batch/4.0.x/reference/html/job.html#configuringJobRepository
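For illustration, a minimal sketch of the extension approach; the bean name batchDataSource is an assumption, so use whatever qualifier your own configuration defines:
import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

@Component
public class BatchDataSourceConfigurer extends DefaultBatchConfigurer {

    // Route Spring Batch's metadata tables to the dedicated DataSource.
    @Override
    @Autowired
    public void setDataSource(@Qualifier("batchDataSource") DataSource dataSource) {
        super.setDataSource(dataSource);
    }
}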

Duplicate your existing data source properties and override the BatchConfigurer to return this new data source. Then, in the new data source's properties, change either:
- the user connecting to the database to one whose default schema is the desired schema for the Spring Batch tables, or
- the connection URL to include the desired schema for the Spring Batch tables.
Which option you choose will depend on your database type, as follows:
For SQL Server, you can define the default schema for the user you are using to connect to the database (this is what I did):
CREATE SCHEMA batchschema;
USE database;
CREATE USER batchuser;
GRANT CREATE TABLE TO batchuser;
ALTER USER batchuser WITH DEFAULT_SCHEMA = batchschema;
ALTER AUTHORIZATION ON SCHEMA::batchschema TO batchuser;
For Postgres 9.4 you can specify the schema in the connection URL using the currentSchema parameter: jdbc:postgresql://host:port/db?currentSchema=batch
For Postgres before 9.4 you can specify the schema in the connection URL using the searchpath parameter: jdbc:postgresql://host:port/db?searchpath=batch
For Oracle it looks like the schema would need to be set on the session. I'm not exactly sure how this one would work...
ALTER SESSION SET CURRENT_SCHEMA = batchschema
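One way to issue that statement automatically (a sketch only, not confirmed by the answer above): the Tomcat JDBC pool that the question already uses supports an init SQL statement that runs once per connection as it is created, which is a better fit than a validation query. Reusing the question's batchDataSourceProperties:
// Hypothetical wiring for the question's Tomcat pool; the schema name is a placeholder.
org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
dataSource.setDriverClassName(batchDataSourceProperties.getDriverClassName());
dataSource.setUrl(batchDataSourceProperties.getUrl());
dataSource.setUsername(batchDataSourceProperties.getUsername());
dataSource.setPassword(batchDataSourceProperties.getPassword());
// Runs once when each pooled connection is established, unlike a validation query.
dataSource.setInitSQL("ALTER SESSION SET CURRENT_SCHEMA = batchschema");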
Qualify each DataSource, set the one you wish to use for the Batch tables as @Primary, and set your datasource for the DefaultBatchConfigurer as follows:
@Bean(name = "otherDataSource")
public DataSource otherDataSource() {
    //...
}

@Primary
@Bean(name = "batchDataSource")
public DataSource batchDataSource() {
    //...
}

@Bean
BatchConfigurer configurer(@Qualifier("batchDataSource") DataSource dataSource) {
    return new DefaultBatchConfigurer(dataSource);
}

Related

How to specify alternate datasource when doing raw SQL queries in Grails 2.3.x?

We have a read-only reporting database clone set up as an alternate datasource in our Grails application, named 'reporting'. This works great when using dynamic finders or criteria, as per the Grails MyDomain.reporting.findByXXXX(..etc..)
However, there are some nasty queries that have to be done in raw SQL. Our current way of doing this (in a service) is:
def sessionFactory;

public static List getSomeBigNastyData(...)
{
    sessionFactory.currentSession.createSQLQuery(
        """
        Big Ugly Query
        """
    ).list();
}
But this does not go to the reporting database, and there doesn't seem to be a way of specifying 'reporting'. Is there a way to specify the datasource to execute raw SQL against?
It's possible to use the dataSource as an injected bean and groovy.sql.Sql to run your queries. Below is a simple example of a service that will use your data source and allow you to run a query against it.
package com.example

import groovy.sql.GroovyRowResult
import groovy.sql.Sql

class ExampleSqlService {
    def dataSource_reporting // your named data source

    List<GroovyRowResult> query(String sql) {
        def db = new Sql(dataSource_reporting)
        return db.rows(sql)
    }
}
Using a service (like the above example) allows you to access it from basically anywhere (Controller, Service, TagLib, Domain, etc.)

Database migration using Code First in MVC 4

I created my MVC 4 application using Code First, and the database and tables were generated accordingly. Now I want to delete one column of my table (from the back end). Is there any way for the changes to occur in my code automatically according to the change in the database?
Through the Package Manager Console, using the migrations technique:
PM> enable-migrations -EnableAutomaticMigrations
In the code configuration, do the following:
public Configuration()
{
    AutomaticMigrationsEnabled = true;
    AutomaticMigrationDataLossAllowed = true;
}
Now, when the model changes, do the following:
PM> update-database
Doing it through code
Use the DropCreateDatabaseAlways initializer for your database. It will always recreate the database during the first usage of the context in the app domain:
Database.SetInitializer(new DropCreateDatabaseAlways<YourContextName>());
Actually, if you want to seed your database, create your own initializer, which will inherit from DropCreateDatabaseAlways:
public class MyInitializer : DropCreateDatabaseAlways<YourContextName>
{
    protected override void Seed(YourContextName context)
    {
        // seed the database here
    }
}
And set it before the first usage of the context:
Database.SetInitializer(new MyInitializer());
Well, if you are using the Code First technique, remove the column from your model and run a migration script (google it); this will remove the column from your database. But what you want is the reverse, which I am not sure can be done.

Need information about JPA-based transactions for a dynamic SQL table

Firstly, I would like to state our environment details.
We are trying to use EJB/Hibernate with SQL Azure to create apps on the Azure cloud, using Eclipse.
We need to create and transact on databases dynamically. We are able to create databases dynamically; however, on trying to transact on these we are getting an error:
"java.sql.SQLException: No suitable driver found for connection url"
Transacting statically using JPA was not a problem, but dynamic transactions cannot be done. The EntityManager object is created but is not able to connect to the database.
Could someone help us and explain how we can handle transactions using JPA for dynamically created databases?
Thanks,
Saugata
[edit] We are using the following persistence.xml:
<persistence-unit name="...">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <!-- <jta-data-source>java:jboss/EDS</jta-data-source> -->
    <class>net.oauth.database.Co</class>
    <class>net.oauth.database.Cr</class>
    <properties>
        <property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.JTATransactionFactory" />
        <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup" />
    </properties>
</persistence-unit>
Our code to connect to the db is as follows:
Map configOverrides = new HashMap();
configOverrides.put("hibernate.connection.password", "");
configOverrides.put("hibernate.connection.username", "");
configOverrides.put("hibernate.connection.driver_class","com.microsoft.sqlserver.jdbc.SQLServerDriver");
configOverrides.put("hibernate.connection.url", "jdbc:sqlsever://;" + "databaseName=;user=;password=");
EntityManagerFactory factory = Persistence.createEntityManagerFactory(ENTERPRISE_UNIT_NAME, configOverrides);
Please note that we are trying to create and connect to the DB dynamically, and hence do not have the DB created statically.
For this we are getting the error:
"java.sql.SQLException: No suitable driver found for connection url"
Create a persistence.xml with a persistence unit and put everything there that is static (e.g. database dialect, logging parameters, etc.).
Then use the following method to create the entity manager:
javax.persistence.Persistence.createEntityManagerFactory(String persistenceUnitName, Map properties);
Supply the variable parameters in the map, like this:
properties.put("hibernate.connection.url", "jdbc:postgresql://127.0.0.1/test");
properties.put("hibernate.connection.username", "joe");
properties.put("hibernate.connection.password", "pass");

Fluent NHibernate - Empty Schema SQL Files

I am still quite confused about NHibernate schema export and creation. What I want to achieve is to export a drop-create SQL schema file AND/OR recreate the database schema, depending on the application configuration.
Obviously I started with
private void BuildSchema(NHConf.Configuration cfg) {
    var schema = new SchemaExport(cfg);
    schema.SetOutputFile(filename);
    schema.Create(true, true);
    schema.Drop(true, true);
}
But recently I figured out that what actually causes my schema to be recreated is NHConf.Environment.Hbm2ddlAuto set to 'create'; SchemaExport has nothing to do with it.
Also, the files with the exported SQL schema exist, but they are all empty (0 KB), which is my main issue, as I manage schema recreation via the Hbm2ddlAuto property.
Any ideas?
EDIT:
The BuildSchema method is called just before cfg.BuildSessionFactory()
I use FluentNHibernate with NH 3.1 and Oracle 11g
In your method you execute drop-create, then drop, and you have also enabled writing to the database.
This is enough to create the files; make sure you set filename correctly:
new SchemaExport(config)
    .SetDelimiter(";")
    .SetOutputFile(filename)
    .Create(false, false);
To create it in the database, this works for me:
new SchemaExport(config).Create(false, true);
If you are using Fluent configuration, check your mapping file for:
SchemaAction.None();
In my case I commented out this line, and the schema export to a file now works!
This post moved me in the right direction: http://lostechies.com/rodpaddock/2010/06/29/using-fluent-nhibernate-with-legacy-databases/
SchemaAction.None();
The next interesting feature is SchemaAction.None(). When developing our applications, I have an integration test that is used to build all our default schema. I DON'T want these tables to be generated in our schema; they are external. SchemaAction.None() tells NHibernate not to create this entity in the database.

If I use the Groovy Sql class in Grails, does it use the Grails connection pooling?

From the examples below in the Sql documentation: if I use either of these ways to create a Sql instance in the middle of a Grails service class, will it use the Grails connection pooling? Will it participate in any transaction capabilities? Do I need to close the connection myself, or will it automatically go back into the pool?
def db = [url:'jdbc:hsqldb:mem:testDB', user:'sa', password:'', driver:'org.hsqldb.jdbcDriver']
def sql = Sql.newInstance(db.url, db.user, db.password, db.driver)
or, if you have an existing connection (perhaps from a connection pool) or a datasource, use one of the constructors:
def sql = new Sql(datasource)
Now you can invoke sql, e.g. to create a table:
sql.execute '''
    create table PROJECT (
        id integer not null,
        name varchar(50),
        url varchar(100)
    )
'''
If you execute:
Sql.newInstance(...)
you will create a new connection and you aren't using the connection pool.
If you want to use the connection pool, you can create a Service with the following command:
grails create-service org.foo.MyService
Then, in your MyService.groovy file, you can manage transactions as follows:
import groovy.sql.Sql

class MyService {
    def dataSource // inject the datasource

    static transactional = true // tell Grails that the service methods will be transactional

    def doSomething() {
        def sql = new Sql(dataSource)
        // rest of your code
    }
}
For more details you can read: http://grails.org/doc/2.0.x/guide/services.html
EDIT:
To manage multiple datasources, you can do one of the following, based on your Grails version.
If you are using a Grails version greater than 1.1.1 (not 2.x) you can use the following plugin:
http://grails.org/plugin/datasources
If you are using Grails 2.x you can use the out-of-the-box support:
http://grails.org/doc/2.0.0.RC1/guide/conf.html#multipleDatasources
If you create the Sql object like this, I believe it will use connection pooling:
import groovy.sql.Sql
import org.hibernate.SessionFactory

class SomeService {
    SessionFactory sessionFactory

    def someMethod() {
        Sql sql = new Sql(sessionFactory.currentSession.connection())
    }
}