EclipseLink 2.7.3 not performing 'drop-and-create'

Using the following config in persistence.xml
<property name="eclipselink.flush-clear.cache" value="Drop"/>
<property name="eclipselink.cache.shared.default" value="false"/>
<property name="eclipselink.cache.shared" value="false"/>
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
According to the docs, that configuration means:
EclipseLink will attempt to DROP all tables, then CREATE all tables.
If any issues are encountered, EclipseLink will follow the default
behavior of your specific database and JDBC driver combination, then
continue with the next statement.
This is useful in development if the schema frequently changes or
during testing when the existing data needs to be cleared.
Note: Using drop-and-create will remove all of the data in the tables
when they are dropped. You should never use this option on a production
schema that has valuable data in the database. If the schema changed
dramatically, there could be old constraints in the database that
prevent the dropping of the old tables. This may require the old
schema to be dropped through another mechanism.
Using the following dependencies:
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.jpa.modelgen.processor</artifactId>
<version>2.7.3</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.eclipse.persistence</groupId>
<artifactId>org.eclipse.persistence.jpa</artifactId>
<version>2.7.3</version>
</dependency>
What am I doing wrong?
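For reference, the ddl-generation property is often paired with an explicit output mode; a minimal sketch of that pairing (the output-mode value shown here is an assumption for illustration, not something taken from the post above):
<property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
<!-- assumption: write the generated DDL directly to the database rather than to script files -->
<property name="eclipselink.ddl-generation.output-mode" value="database"/>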

Related

Spring config server with JDBC is throwing Invalid config server configuration error

I want to use JDBC (MySQL) with Spring Cloud Config Server, but it always fails. This is what I am doing:
Spring Cloud version: Finchley.SR2
In pom.xml:
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-config-server</artifactId>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-jdbc</artifactId>
<exclusions>
<exclusion>
<groupId>org.apache.tomcat</groupId>
<artifactId>tomcat-jdbc</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
Inside the application.config:
spring.profiles.active= jdbc
spring.datasource.url=jdbc:mysql://localhost:3306/config_db
spring.datasource.username=root
spring.datasource.password=12345
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.platform= mysql
spring.cloud.config.server.jdbc.sql= SELECT `key`, `value` FROM `properties` WHERE `application`=? AND `profile`=? AND `label`=?;
spring.cloud.config.server.jdbc.order=0
spring.cloud.config.server.default-profile=production
spring.cloud.config.server.default-label=latest
Finally, when I start the server, I get the error below:
APPLICATION FAILED TO START
Description:
Invalid config server configuration.
Action:
If you are using the git profile, you need to set a Git URI in your configuration. If you are using a native profile and have spring.cloud.config.server.bootstrap=true, you need to use a composite configuration.
I am not using git here, so why is the error about a Git URI?
I had the same issue when I used MySQL.
It seems to be an issue with MySQL JdbcTemplate (look here).
I switched to H2 to store configuration and it works.
I wonder if there is any workaround to use MySQL?
I got the same error when I attempted to exclude DataSourceAutoConfiguration.class on startup, i.e.
@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
When I just used
@SpringBootApplication
everything worked as expected.
My reason for excluding the class was to stop the auto-generation of a password on startup.

Using liquibase with Altibase

We already have an existing project with Liquibase scripts (MySQL, PostgreSQL). Now we want to support a new database named Altibase. But when we run liquibase:dropAll liquibase:update, we get:
[ERROR] Failed to execute goal org.liquibase:liquibase-maven-plugin:3.5.3:dropAll
(default-cli) on project project-model: Error setting up or running Liquibase:
liquibase.exception.LockException: liquibase.exception.DatabaseException:
Data type module (Name="DATETIME") not found.
[Failed SQL:
CREATE TABLE ALTIBASE.DATABASECHANGELOGLOCK (
ID INT NOT NULL, LOCKED BOOLEAN NOT NULL, LOCKGRANTED datetime,
LOCKEDBY VARCHAR(255),
CONSTRAINT PK_DATABASECHANGELOGLOCK PRIMARY KEY (ID))] -> [Help 1]
Here's the pom's configuration:
<profile>
<id>altibase</id>
<activation>
<property>
<name>env</name>
<value>altibase</value>
</property>
</activation>
<properties>
<db.driver>Altibase.jdbc.driver.AltibaseDriver</db.driver>
<db.url>jdbc:Altibase://ourdomain.cloud:20001/ourdb</db.url>
<db.schema>ALTIBASE</db.schema>
<db.username>admin</db.username>
<db.password>admin</db.password>
</properties>
<dependencies>
<dependency>
<groupId>com.altibase</groupId>
<artifactId>Altibase</artifactId>
<version>1.0.1.2</version>
<scope>provided</scope>
</dependency>
</dependencies>
</profile>
Note that we installed the Altibase jar locally via Maven.
I guess the DB is not directly supported by Liquibase, and it does not seem to have a DATETIME data type.
Check out the section "Using Unsupported Databases" on this site: https://www.liquibase.org/databases.html.
Maybe use the optional parameter --currentDateTimeFunction=<value>. Also see this site: https://www.liquibase.org/documentation/command_line.html
Altibase has a DATE data type, and its DATE data type is the same as other DBMSs' DATETIME data type.
Altibase's DATE data type can handle date data with microsecond precision.
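This won't change the DATETIME that Liquibase uses for its own DATABASECHANGELOGLOCK table in the error above, but for your own changesets one way to avoid hard-coding DATETIME is a per-dbms changelog property. A minimal sketch, assuming Liquibase resolves the Altibase connection under the short name altibase (which may not hold for an unsupported database):
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <!-- default column type for the databases we already support -->
    <property name="datetime.type" value="DATETIME"/>
    <!-- assumption: override to DATE when running against Altibase -->
    <property name="datetime.type" value="DATE" dbms="altibase"/>
    <changeSet id="example-1" author="example">
        <createTable tableName="EXAMPLE">
            <column name="CREATED_AT" type="${datetime.type}"/>
        </createTable>
    </changeSet>
</databaseChangeLog>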

Infinispan configuration with Wildfly-Swarm

I have a project on Wildfly-Swarm. I need to use the query cache, so I put the Infinispan dependency in my POM.
<dependency>
<groupId>org.wildfly.swarm</groupId>
<artifactId>infinispan</artifactId>
</dependency>
I set up Infinispan in my persistence.xml:
<persistence-unit name="Condominio" transaction-type="JTA">
<shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
<properties>
<property name="hibernate.cache.use_query_cache" value="true"/>
<property name="hibernate.cache.infinispan.entity.expiration.max_idle" value="300000"/>
...
</properties>
</persistence-unit>
It works well; the cache works. But I read the documentation of the Infinispan Wildfly-Swarm fraction (https://reference.wildfly-swarm.io/fractions/infinispan.html) and I am in doubt whether I should set it up in project-defaults.yml using those configurations.
I don't know which configurations of the Infinispan Wildfly-Swarm fraction are equivalent to the persistence.xml configurations.

Maven Profile - Activate Profile depending on packaging

I have a POM which declares web application stuff that is common to my projects. I use this as the parent for all web applications.
Is it possible to activate a profile only when the packaging is war? I have tried the property approach, but that doesn't work (as it isn't a system/environment property).
Since this fails the build, I can simply disable that profile when installing the POM, but I'd like it to be more intelligent on its own.
Walter
You can simply check for the existence of src/main/webapp. Each web application that uses the Maven standard directory layout should contain this folder, so you avoid unnecessary dummy files.
<profile>
<id>custom-profile-eclipse-project-generation-webapp</id>
<activation>
<file>
<exists>${basedir}/src/main/webapp</exists>
</file>
</activation>
<build>
</build>
</profile>
More precisely, you can also check for the existence of ${basedir}/src/main/webapp/WEB-INF/web.xml, which should definitively identify a war project (a sketch of that variant follows below).
I use this configuration myself in my common super-POM to configure the maven-eclipse-plugin for different project types. That's very handy for getting homogeneous Eclipse configurations across the same project type in our organization, especially when developers straightforwardly run eclipse:eclipse on multi-module projects.
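A sketch of the web.xml-based variant mentioned above, reusing the same activation mechanism (the profile id is made up):
<profile>
    <id>webapp-detect-by-web-xml</id>
    <activation>
        <file>
            <!-- a traditional war project carries its deployment descriptor here -->
            <exists>${basedir}/src/main/webapp/WEB-INF/web.xml</exists>
        </file>
    </activation>
    <build>
        <!-- war-specific plugin configuration goes here -->
    </build>
</profile>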
I know this isn't answering your question directly, but the usual workaround for problems like this is to just use specialization (as with classes).
So you have your MasterPom with all common behavior.
MasterWarPom extends MasterPom (it is its parent), and you put any 'packaging is war' specializations in there.
Likewise you could have MasterJarPom, etc ...
That way the differences are split out nicely.
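A minimal sketch of what such a MasterWarPom might look like (the coordinates are made up for illustration):
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <!-- hypothetical coordinates for the common MasterPom -->
        <groupId>com.example</groupId>
        <artifactId>master-pom</artifactId>
        <version>1.0.0</version>
    </parent>
    <artifactId>master-war-pom</artifactId>
    <packaging>pom</packaging>
    <build>
        <!-- war-specific plugin configuration shared by all webapps goes here -->
    </build>
</project>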
There's no clean way to do that; the parent module has no way of knowing the child's packaging. (Non-clean solutions would involve creating a plugin that parses the child module's pom, etc.)
The best I've been able to come up with for these sorts of scenarios has been to use a file-based activation trigger.
E.g. my parent pom has:
<profile>
<id>maven-war-project</id>
<activation>
<file><!-- add a file named .maven-war-project-marker to webapp projects to activate this profile -->
<exists>${basedir}/.maven-war-project-marker</exists>
</file>
</activation>
<build>
<plugins>
<!-- configuration for webapp plugins here -->
</plugins>
</build>
</profile>
and webapp projects that inherit from this parent contain a file named
'.maven-war-project-marker'
that activates the profile.
This looks pretty obtuse, but it works fairly reliably, whereas:
- property-based activation is unreliable if a different person or system does the build, and
- inheriting from type-specific parents became a bit cumbersome for me, because the grandparent POM changes version relatively frequently (it is used to define 'standard' or preferred versions of common dependencies), which in turn required corresponding releases of all of the type-specific parents with no change other than the grandparent version.
Try it this way:
mvn package -Dmaven.test.skip=true -Dwar
<project ×××××>
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>××××</groupId>
<artifactId>×××××</artifactId>
<version>×××××</version>
<relativePath>../../</relativePath>
</parent>
<artifactId>×××××</artifactId>
<name>${project.artifactId}-${project.version}</name>
<description>${project.artifactId}-${project.version}</description>
<properties>
<packaging.type>jar</packaging.type>
</properties>
<profiles>
<profile>
<activation>
<property>
<name>war</name>
</property>
</activation>
<properties>
<packaging.type>war</packaging.type>
</properties>
<build>
<finalName>ROOT</finalName>
</build>
</profile>
</profiles>
<packaging>${packaging.type}</packaging>
<dependencies>
<dependency>
... ...
</dependency>
... ...
</dependencies>
</project>

Exclude dependency in a profile

I have a maven module which has some dependencies. In a certain profile, I want to exclude some of those dependencies (to be exact, all dependencies with a certain group id). They however need to be present in all other profiles. Is there a way to specify exclusions from the dependencies for a profile?
To my knowledge, no, you can't deactivate dependencies (you can exclude transitive dependencies but this is not what you are asking for) and yes, what you are currently doing with the POM (manually editing it) is wrong.
So, instead of removing dependencies, you should put them in a profile and either:
Option #1: use the profile when required or
Option #2: mark the profile as activated by default, or put it in the list of active profiles, and deactivate it when required (a minimal sketch of this option follows the list below).
A third option would be (not profile based):
Option #3: separate things into two separate modules (since you have separate concerns) and use inheritance.
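For illustration, a minimal sketch of Option #2; the profile id and coordinates are made up, and the dependency is moved out of the main <dependencies> section and into the profile:
<profiles>
    <profile>
        <id>with-optional-deps</id>
        <activation>
            <!-- active unless explicitly disabled, e.g. mvn install -P -with-optional-deps -->
            <activeByDefault>true</activeByDefault>
        </activation>
        <dependencies>
            <dependency>
                <!-- hypothetical dependency you want to drop in some builds -->
                <groupId>com.example</groupId>
                <artifactId>optional-lib</artifactId>
                <version>1.0.0</version>
            </dependency>
        </dependencies>
    </profile>
</profiles>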
Instead of excluding dependencies in a profile, you can set them as provided in it. This doesn't require any overly complex configuration and will exclude the dependencies you don't want from the final build.
In the desired profile, add a dependencies section, copy the declaration of the ones you want to exclude and scope them as provided.
For example, let's say you want to exclude slf4j-log4j12:
<profiles>
<!-- Other profiles -->
<profile>
<id>no-slf4j-log4j12</id>
<dependencies>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.7.2</version>
<scope>provided</scope>
</dependency>
</dependencies>
</profile>
<!-- Other profiles -->
</profiles>
One way that occurs to me is to have the dependencies in a separate pom. You can then add an <exclusions> section via the profile.
<dependencies>
<dependency>
<groupId>my.company.dependencies</groupId>
<artifactId>my-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<type>pom</type>
</dependency>
</dependencies>
<profile>
<activation>
<activeByDefault>false</activeByDefault>
<property>
<name>exclude-deps</name>
</property>
</activation>
<dependencies>
<dependency>
<groupId>my.company.dependencies</groupId>
<artifactId>my-dependencies</artifactId>
<version>1.0.0-SNAPSHOT</version>
<type>pom</type>
<exclusions>
<exclusion>
<groupId>my.company</groupId>
<artifactId>bad-dep-1</artifactId>
</exclusion>
<exclusion>
<groupId>my.company</groupId>
<artifactId>bad-dep-2</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
</profile>
I don't think it is possible to exclude direct dependencies either (at least, nothing is mentioned here).
The best thing you can do is to enclose the desired dependencies for each case in different profiles (as suggested already), but you'll need to create two "mutually exclusive" profiles, with one of them "active by default". The most reliable way to achieve this is by using a parameter for your profile activation, e.g.
<profiles>
<profile>
<id>default-profile</id>
<activation>
<property><name>!exclude</name></property>
</activation>
<dependencies>
dependency-A
dependency-B
...
</dependencies>
</profile>
<profile>
<id>exclude-profile</id>
<activation>
<property><name>exclude</name></property>
</activation>
<!-- exclude/replace dependencies here -->
</profile>
</profiles>
Then using "mvn [goal]" will use profile "default-profile", but "mvn [goal] -Dexclude" will use profile "exclude-profile".
Note that using 'activeByDefault' instead of a parameter for your "default" profile might work in some cases but it also might lead to unexpected behavior. The problem is that 'activeByDefault' makes a profile active as long as there is no other active profile in any other module of a multi-module build.
Maven is a tool; we can hack it.
Maven runs fine if you have the same artifact + version defined as a dependency twice.
So define a profile that eliminates an artifact + version by changing it to another artifact we already have.
For example, in the pom.xml:
... other pom stuff ...
<properties>
<artifact1>artifact1</artifact1>
<artifact2>artifact2</artifact2>
<artifact1.version>0.4</artifact1.version>
<artifact2.version>0.5</artifact2.version>
</properties>
<profile>
<id>remove-artifact2</id>
<properties>
<artifact1>artifact1</artifact1>
<artifact2>artifact1</artifact2>
<artifact1.version>0.4</artifact1.version>
<artifact2.version>0.4</artifact2.version>
</properties>
</profile>
Now if you install this pom.xml without the profile, artifact1:0.4 and artifact2:0.5 will be the dependencies.
But if you install this pom.xml with the profile, mvn -P remove-artifact2,
the resulting dependency set contains only artifact1:0.4.
This comes in quite handy during API migrations where artifacts are renamed and versions are not compatible.
A bit dirty but lightweight solution is to use <scope>import</scope>.
Unlike the other scopes, this:
- disables both compile-time and runtime dependencies, unlike provided or runtime, which disable only one at a time
- won't mess up your test scope
- doesn't require specifying a path to some dummy jar, as the system scope would
Nothing gets imported as long as you use this hack outside dependencyManagement.
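As I read this suggestion, it amounts to re-declaring the unwanted dependency inside a profile with the import scope. A sketch with made-up coordinates; whether Maven silently ignores such a declaration can vary between Maven versions, so treat it as the hack it is:
<profile>
    <id>drop-unwanted-dep</id>
    <dependencies>
        <dependency>
            <!-- hypothetical dependency to neutralize in this profile -->
            <groupId>com.example</groupId>
            <artifactId>unwanted-lib</artifactId>
            <version>1.0.0</version>
            <!-- import scope outside dependencyManagement: nothing gets pulled in, per the note above -->
            <scope>import</scope>
        </dependency>
    </dependencies>
</profile>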