I am still quite confused about NHibernate schema export and creation. What I want to achieve is to export a schema drop-create SQL file AND/OR recreate the database schema, depending on the application configuration.
Obviously I started with
private void BuildSchema(NHConf.Configuration cfg) {
    var schema = new SchemaExport(cfg);
    schema.SetOutputFile(filename);
    schema.Create(true, true);
    schema.Drop(true, true);
}
But recently I figured out that what actually causes my schema to be recreated is NHConf.Environment.Hbm2ddlAuto set to 'create'; SchemaExport has nothing to do with it.
Also, the files with the exported SQL schema exist, but they are all empty (0 KB). This is my main issue, as I manage schema recreation through the Hbm2ddlAuto property.
Any ideas?
EDIT:
The BuildSchema method is called just before cfg.BuildSessionFactory().
I use FluentNHibernate with NH 3.1 and Oracle 11g.
In your method you execute drop-create and then drop, and you also enabled writing to the database.
This is enough to create the files; make sure you set the filename correctly:
new SchemaExport(config)
    .SetDelimiter(";")
    .SetOutputFile(filename)
    .Create(false, false);
To create the schema in the database, this works for me:
new SchemaExport(config).Create(false, true);
If you are using Fluent configuration, check your mapping file for:
SchemaAction.None();
In my case I commented out this line, and the schema export to file now works!
This post moved me in the right direction: http://lostechies.com/rodpaddock/2010/06/29/using-fluent-nhibernate-with-legacy-databases/
SchemaAction.None();
The next interesting feature is SchemaAction.None(). When developing our applications I have an integration test that is used to build all our default schema. I DON'T want these tables to be generated in our schema; they are external. SchemaAction.None() tells NHibernate not to create this entity in the database.
I have a multi-tenancy structure set up where each client has a schema set up for them. The structure mirrors the "parent" schema, so any migration that happens needs to happen for each schema identically.
I am using Flask-Script with Flask-Migrate to handle migrations.
What I tried so far is iterating over my schema names, building a URI for them, scoping a new db.session with the engine generated from the URI, and finally running the upgrade function from flask_migrate.
@manager.command
def upgrade_all_clients():
    clients = clients_model.query.all()
    for c in clients:
        application.extensions["migrate"].migrate.db.session.close_all()
        application.extensions["migrate"].migrate.db.session = db.create_scoped_session(
            options={
                "bind": create_engine(generateURIForSchema(c.subdomain)),
                "binds": {},
            }
        )
        upgrade()
    return
I am not entirely sure why this doesn't work, but the result is that it only runs the migration for the db that was set up when the application starts.
My theory is that I am not changing the session that was originally set up when the manager script runs.
Is there a better way to migrate each of these schemas without setting multiple binds and using the --multidb parameter? I don't think I can use SQLALCHEMY_BINDS in the config, since these schemas need to be created and destroyed dynamically.
For those who are encountering the same issue, the answer to my specific situation was incredibly simple.
@manager.command
def upgrade_all_clients():
    clients = clients_model.query.all()
    for c in clients:
        print("Upgrading client '{}'...".format(c.subdomain))
        db.engine.url.database = c.subdomain
        _upgrade()
    return
The database attribute of the db.engine.url is what targets the schema. I don't know if this is the best way to solve this, but it does work and I can migrate each schema individually.
I created the DDL scripts using Liquibase by providing the input database changelog.
The code looks like this:
private void toSQL(DatabaseChangeLog d)
        throws DatabaseException, LiquibaseException, UnsupportedEncodingException, IOException {
    FileSystemResourceAccessor fsOpener = new FileSystemResourceAccessor();
    CommandLineResourceAccessor clOpener = new CommandLineResourceAccessor(this.getClass().getClassLoader());
    CompositeResourceAccessor fileOpener = new CompositeResourceAccessor(new ResourceAccessor[] { fsOpener, clOpener });
    Database database = CommandLineUtils.createDatabaseObject(fileOpener, this.url, this.username, this.password, this.driver,
            this.defaultCatalogName, this.defaultSchemaName, Boolean.parseBoolean(this.outputDefaultCatalog),
            Boolean.parseBoolean(this.outputDefaultSchema), this.databaseClass,
            this.driverPropertiesFile, this.propertyProviderClass, this.liquibaseCatalogName,
            this.liquibaseSchemaName, this.databaseChangeLogTableName, this.databaseChangeLogLockTableName);
    Liquibase liquibase = new Liquibase(d, null, database);
    liquibase.update(new Contexts(this.contexts), new LabelExpression(this.labels), getOutputWriter());
}
and my liquibase.properties goes like this
url=jdbc\:sqlserver\://server\:1433;databaseName\=test
username=test
password=test#123
driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
referenceUrl=hibernate:spring:br.com.company.vacation.domain?dialect=org.hibernate.dialect.SQLServer2008Dialect
As you can see, Liquibase expects a lot of database parameters such as url, username, password, and driver, which I will not be able to provide.
How can I achieve this without providing any of these parameters? Is it possible?
No, it is not possible. If you want liquibase to interact with a database, you have to tell it how to connect to that database.
I investigated a little into Liquibase's operation in offline mode. It goes like this:
Running in offline mode only supports updateSql, rollbackSQL, tag, and tagExists. It does not support direct update, diff, or preconditions as there is nothing to actually update or state to check.
An offline database is “connected” to using a url syntax of offline:DATABASE_TYPE?param1=value1&aparam2=value2.
The following code will suffice:
this.url = "offline:postgres?param1=value1&aparam2=value2";
this.driver = null;
this.username = null;
this.password = null;
Hence the db details need not be provided; the offline URL can be built up from the store type alone.
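Putting this together: with those fields set, the question's toSQL method runs unchanged. A minimal sketch of the resulting calls (the offline:sqlserver URL is a placeholder built from the store type, and the argument list simply mirrors the createDatabaseObject call in the question with the connection parameters nulled out):
Database database = CommandLineUtils.createDatabaseObject(fileOpener,
        "offline:sqlserver",    // offline URL derived from the store type
        null, null, null,       // username, password and driver are not needed
        null, null, false, false,
        null, null, null,
        null, null, null, null);
Liquibase liquibase = new Liquibase(d, null, database);
// with an offline connection, update() only writes the generated SQL to the writer
liquibase.update(new Contexts(), new LabelExpression(), getOutputWriter());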
I want to use a different schema to save the Spring Batch tables. I can see that my new datasource is set in the JobRepositoryFactoryBean, but the tables are still being created in the other schema, where I have my business tables. I read somewhere that I can use dataSource.setValidationQuery to alter the schema, but it still doesn't work and I can't solve this. Below are the JobRepositoryFactoryBean and the datasource properties.
@Bean
@Qualifier("batchDataSource")
protected JobRepository createJobRepository() throws Exception {
    JobRepositoryFactoryBean factory = createJobRepositoryFactoryBean();
    factory.setDataSource(getDataSource());
    if (getDbType() != null) {
        factory.setDatabaseType(getDbType());
    }
    factory.setTransactionManager(getTransactionManager());
    factory.setIsolationLevelForCreate(getIsolationLevel());
    factory.setMaxVarCharLength(maxVarCharLength);
    factory.setTablePrefix(getTablePrefix());
    factory.setValidateTransactionState(validateTransactionState);
    factory.afterPropertiesSet();
    return factory.getObject();
}
spring.datasource.url=url
spring.datasource.username=username
spring.datasource.password=pwd
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.validation-query=ALTER SESSION SET CURRENT_SCHEMA=schemaname
#batch setting
spring.batch.datasource.url=burl
spring.batch.datasource.username=busername
spring.batch.datasource.password=bpwd
spring.batch.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.batch.datasource.validation-query=ALTER SESSION SET CURRENT_SCHEMA=batchschema
org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
dataSource.setName("batchDataSourceName");
dataSource.setDriverClassName(batchDataSourceProperties.getDriverClassName());
dataSource.setUrl(batchDataSourceProperties.getUrl());
dataSource.setUsername(batchDataSourceProperties.getUsername());
dataSource.setPassword(batchDataSourceProperties.getPassword());
// dataSource.setValidationQuery(batchDataSourceProperties.getValidationQuery());
The property below in application.properties is working for me. It will create the meta schema tables under new_schema in your DB.
spring.batch.tablePrefix=new_schema.BATCH_
Below is the version of Spring Boot I am using.
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.1.3.RELEASE</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
When using Spring Batch's #EnableBatchProcessing, the DataSource used by the Spring Batch tables is the one provided by the BatchConfigurer. If you are using more than one DataSource in your application, you must create your own BatchConfigurer (either by extending DefaultBatchConfigurer or implementing the interface) so that Spring Batch knows which to use. You can read more about this customization in the reference documentation here: https://docs.spring.io/spring-batch/4.0.x/reference/html/job.html#configuringJobRepository
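For illustration, a minimal sketch of such a customization, assuming a second DataSource bean named "batchDataSource" is defined elsewhere in the configuration (the class and bean names here are placeholders):
import javax.sql.DataSource;
import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class BatchConfig extends DefaultBatchConfigurer {

    // DefaultBatchConfigurer(DataSource) routes the JobRepository, and hence
    // the BATCH_* meta tables, to the given DataSource.
    @Autowired
    public BatchConfig(@Qualifier("batchDataSource") DataSource batchDataSource) {
        super(batchDataSource);
    }
}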
Duplicate your existing data source properties and override BatchConfigurer to return this new data source. Then, in the new data source's properties, change either:
the user connecting to the database, to one whose default schema is the desired schema for the Spring Batch tables, or
the connection URL, to include the desired schema for the Spring Batch tables.
The option you choose will depend on your database type as follows:
For SQL Server you can define the default schema for the user you are using to connect to the database (I did this one).
CREATE SCHEMA batchschema;
USE database;
CREATE USER batchuser;
GRANT CREATE TABLE TO batchuser;
ALTER USER batchuser WITH DEFAULT_SCHEMA = batchschema;
ALTER AUTHORIZATION ON SCHEMA::batchschema TO batchuser;
For Postgres 9.4 you can specify schema in the connection URL using currentSchema parameter: jdbc:postgresql://host:port/db?currentSchema=batch
For Postgres before 9.4 you can specify schema in the connection URL using searchpath parameter: jdbc:postgresql://host:port/db?searchpath=batch
For Oracle it looks like the schema would need to be set on the session. I'm not exactly sure how this one would work...
ALTER SESSION SET CURRENT_SCHEMA = batchschema
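One hedged possibility for Oracle, given the Tomcat JDBC pool shown in the question: run the ALTER SESSION statement as the pool's initSQL, so each new physical connection switches schema before Spring Batch uses it (the connection details below are placeholders):
org.apache.tomcat.jdbc.pool.DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource();
ds.setUrl("jdbc:oracle:thin:@//host:1521/service");
ds.setUsername("batchuser");
ds.setPassword("bpwd");
// initSQL is executed once on each connection when it is first created
ds.setInitSQL("ALTER SESSION SET CURRENT_SCHEMA = batchschema");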
Qualify each DataSource, set the one you wish to use for the Batch tables as @Primary, and set the DataSource for the DefaultBatchConfigurer as follows:
@Bean(name = "otherDataSource")
public DataSource otherDataSource() {
    //...
}

@Primary
@Bean(name = "batchDataSource")
public DataSource batchDataSource() {
    //...
}

@Bean
BatchConfigurer configurer(@Qualifier("batchDataSource") DataSource dataSource) {
    return new DefaultBatchConfigurer(dataSource);
}
I'm attempting to set up NHibernate.Envers to use a separate database, schema, and table suffix. For some reason, the configuration changes I'm setting are being ignored.
Example Code
var nhCfg = new Configuration().Configure();
nhCfg.IntegrateWithEnvers(new AttributeConfiguration());
nhCfg.SetEnversProperty(ConfigurationKey.AuditTableSuffix, "_Log");
nhCfg.SetEnversProperty(ConfigurationKey.DefaultCatalog, "LoggingDatabase");
nhCfg.SetEnversProperty(ConfigurationKey.DefaultSchema, "log");
Does anyone have any suggestions? I'm not sure if I am missing something to commit the configuration change.
Set the Envers properties before calling IntegrateWithEnvers; in the snippet above, move the three SetEnversProperty calls ahead of the IntegrateWithEnvers call.
Firstly, I would like to state our environment details.
We are trying to use EJB-Hibernate with SQL Azure to create apps on the Azure cloud using Eclipse.
We need to create and transact on databases dynamically. We are able to create databases dynamically; however, on trying to transact on them we get the error:
"java.sql.SQLException: No suitable driver found for connection url"
Static transactions using JPA worked without a problem, but dynamic transactions do not: the EntityManager object is created but cannot connect to the database.
Could someone help us and explain how we can handle transactions using JPA for dynamically created databases?
Thanks,
Saugata
[edit] We are using the following persistence.xml:
<persistence-unit name="...">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <!-- <jta-data-source>java:jboss/EDS</jta-data-source> -->
    <class>net.oauth.database.Co</class>
    <class>net.oauth.database.Cr</class>
    <properties>
        <property name="hibernate.transaction.factory_class" value="org.hibernate.transaction.JTATransactionFactory" />
        <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup" />
    </properties>
</persistence-unit>
Our code to connect to the db is as follows:
Map<String, Object> configOverrides = new HashMap<>();
configOverrides.put("hibernate.connection.password", "");
configOverrides.put("hibernate.connection.username", "");
configOverrides.put("hibernate.connection.driver_class", "com.microsoft.sqlserver.jdbc.SQLServerDriver");
configOverrides.put("hibernate.connection.url", "jdbc:sqlserver://;" + "databaseName=;user=;password=");
EntityManagerFactory factory = Persistence.createEntityManagerFactory(ENTERPRISE_UNIT_NAME, configOverrides);
Please note that we are trying to create and connect to the db dynamically, and hence do not have the db created statically.
For this we are getting the error:
"java.sql.SQLException: No suitable driver found for connection url"
Create a persistence.xml with a persistence unit, and put everything there which is static (e.g. database dialect, logging parameters, etc.).
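For illustration, such a unit might look like this minimal sketch (the unit name, transaction type, and dialect are placeholder assumptions, not from the original post):
<persistence-unit name="my-unit" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <properties>
        <!-- static settings only; connection details are supplied at runtime -->
        <property name="hibernate.dialect" value="org.hibernate.dialect.SQLServerDialect" />
    </properties>
</persistence-unit>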
Then use the following method to create the entity manager:
javax.persistence.Persistence.createEntityManagerFactory(String persistenceUnitName, Map properties);
Supply the variable parameters in the map, like this:
Map<String, String> properties = new HashMap<>();
properties.put("hibernate.connection.url", "jdbc:postgresql://127.0.0.1/test");
properties.put("hibernate.connection.username", "joe");
properties.put("hibernate.connection.password", "pass");
// "my-unit" stands in for the persistence unit name from your persistence.xml
EntityManagerFactory factory = Persistence.createEntityManagerFactory("my-unit", properties);