I am facing an issue while executing a query through hplsql.
I have also set the MapReduce reducer count (mapred.reduce.tasks=1).
But the execution still fails with an error. Can someone please help, and point out if I am making any mistake in how I am running it?
./hplsql -e "SET mapred.reduce.tasks=1;
SELECT
  EAM_ASSET_BILL_OF_MATERIALS.QTY QTY,
  EAM_ASSET_BILL_OF_MATERIALS.AUX_DESC AUX_DESC,
  EIM_STOCK_LOCATION_QUANTITIES.LOC_CODE LOC_CODE,
  EIM_STOCK_LOCATION_QUANTITIES.NEW_QTY NEW_QTY,
  EIM_STOCK_LOCATION_QUANTITIES.REB_QTY REB_QTY,
  EIM_STOCK_LOCATION_QUANTITIES.CAP_QTY CAP_QTY,
  EIM_STOCK_LOCATION_QUANTITIES.PRIMARY_FLAG PRIMARY_FLAG,
  TSW_CODES.DESCRIPTION CATEGORY,
  TSW_PARTS.PART_NO PART_NO,
  TSW_PARTS.NOUN NOUN,
  TSW_PARTS.QUALIFIER QUALIFIER,
  TSW_PARTS.DESCRIPTION DESCRIPTION
FROM ABC_ORCL_WAS_M004P.EAM_ASSET_BILL_OF_MATERIALS EAM_ASSET_BILL_OF_MATERIALS
JOIN ABC_ORCL_WAS_M004P.TSW_CODES TSW_CODES
  ON EAM_ASSET_BILL_OF_MATERIALS.CATEGORY_ID = TSW_CODES.CODE_ID
JOIN ABC_ORCL_WAS_M004P.TSW_PARTS TSW_PARTS
  ON EAM_ASSET_BILL_OF_MATERIALS.CHILD_STK_NO = TSW_PARTS.PART_NO
JOIN ABC_ORCL_WAS_M004P.EIM_STOCK_LOCATION_QUANTITIES EIM_STOCK_LOCATION_QUANTITIES
  ON TSW_PARTS.PART_NO = EIM_STOCK_LOCATION_QUANTITIES.STK_NO
WHERE 1 = 1
  AND EIM_STOCK_LOCATION_QUANTITIES.PRIMARY_FLAG = 'Y'"
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/u01/tomcat/ABCD/Tomcat/webapps/ajc/WEB-INF/lib/tika-app-1.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/tomcat/ABCD/Tomcat/webapps/ajc/WEB-INF/lib/orc-tools-1.2.0-uber.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/u01/tomcat/ABCD/Tomcat/webapps/ajc/WEB-INF/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Open connection: jdbc:hive2://localhost:10000 (494 ms)
Starting query
Unhandled exception in HPL/SQL
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:279)
at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:375)
at org.apache.hive.hplsql.Conn.executeQuery(Conn.java:63)
at org.apache.hive.hplsql.Exec.executeQuery(Exec.java:554)
at org.apache.hive.hplsql.Exec.executeQuery(Exec.java:563)
at org.apache.hive.hplsql.Select.select(Select.java:74)
at org.apache.hive.hplsql.Exec.visitSelect_stmt(Exec.java:993)
at org.apache.hive.hplsql.Exec.visitSelect_stmt(Exec.java:51)
at org.apache.hive.hplsql.HplsqlParser$Select_stmtContext.accept(HplsqlParser.java:14249)
at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
at org.apache.hive.hplsql.Exec.visitStmt(Exec.java:985)
at org.apache.hive.hplsql.Exec.visitStmt(Exec.java:51)
at org.apache.hive.hplsql.HplsqlParser$StmtContext.accept(HplsqlParser.java:998)
at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
at org.apache.hive.hplsql.HplsqlBaseVisitor.visitBlock(HplsqlBaseVisitor.java:28)
at org.apache.hive.hplsql.HplsqlParser$BlockContext.accept(HplsqlParser.java:438)
at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visitChildren(AbstractParseTreeVisitor.java:70)
at org.apache.hive.hplsql.Exec.visitProgram(Exec.java:893)
at org.apache.hive.hplsql.Exec.visitProgram(Exec.java:51)
at org.apache.hive.hplsql.HplsqlParser$ProgramContext.accept(HplsqlParser.java:381)
at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visit(AbstractParseTreeVisitor.java:42)
at org.apache.hive.hplsql.Exec.run(Exec.java:753)
at org.apache.hive.hplsql.Exec.run(Exec.java:729)
at org.apache.hive.hplsql.Hplsql.main(Hplsql.java:23)
I'm getting the errors below after enabling these performance tuning parameters:
optimizer.join-reordering-strategy=AUTOMATIC
optimizer.join_distribution_type=AUTOMATIC
experimental.enable-dynamic-filtering=TRUE
I'm using Amazon EMR, Presto version: Presto CLI 0.267-amzn-1.
I'm adding these parameters to /etc/presto/conf/config.properties.
2022-07-11T11:02:36.728Z ERROR main com.facebook.presto.server.PrestoServer Unable to create injector, see the following errors:
Configuration property 'optimizer.join_distribution_type' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:244)
1 error
com.google.inject.CreationException: Unable to create injector, see the following errors:
Configuration property 'optimizer.join_distribution_type' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:244)
1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:543)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:159)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:106)
at com.google.inject.Guice.createInjector(Guice.java:87)
at com.facebook.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:251)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:143)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:85)
2022-07-11T11:02:42.674Z INFO main com.facebook.airlift.log.Logging Disabling stderr output
Any idea how to fix this issue?
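For what it's worth, the "was not used" message means Presto's bootstrap did not recognize that key as a configuration property. A hedged sketch only, with the exact keys to be verified against the documentation for this Presto build: the underscored join_distribution_type spelling is normally the session-property name, while config.properties usually expects the hyphenated config-property form, roughly like this:

join-distribution-type=AUTOMATIC
optimizer.join-reordering-strategy=AUTOMATIC
experimental.enable-dynamic-filtering=true

Alternatively, the underscored names can be set per session from the CLI, e.g. SET SESSION join_distribution_type = 'AUTOMATIC';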
When I run buildindex for my WebSphere application, I get this error in the buildindex log:
[2021/05/10 15:41:57:590 GMT] I Data import pre-processing completed in 0.389 seconds for table TI_CAT_EXTENDED_41060.
[2021/05/10 15:41:57:591 GMT] I /opt/IBM/WebSphere/CommerceServer80/instances/auth/search/pre-processConfig/MC_41060/DB2/wc-dataimport-preprocess-catentry-metainf.xml
[2021/05/10 15:41:57:591 GMT] I
Table name: TI_X_CATENT_META_INF_410600
Fetch size: 500
Batch size: 500
[2021/05/10 15:41:58:048 GMT] I Error for batch element #415: DB2 SQL Error: SQLCODE=-302, SQLSTATE=22001, SQLERRMC=null, DRIVER=4.19.77
[2021/05/10 15:41:58:048 GMT] I SQL: SELECT CATENTRY_ID, TITLE, TITLE_KEYWORDS, SHORT_DESC, SHORT_DESC_KEYWORDS, LONG_DESC, LONG_DESC_KEYWORDS, LOCALE FROM X_CATENT_META_INF WHERE STORE_ID = 41006
[2021/05/10 15:41:58:087 GMT] I
The program exiting with exit code: 1.
Data import pre-processing was unsuccessful. An unrecoverable error has occurred.
[2021/05/10 15:41:58:091 GMT] E com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain:handleExecutionException Exception message: CWFDIH0002: An SQL exception was caught. The following error occurred: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null., stack trace: com.ibm.commerce.foundation.dataimport.exception.DataImportSystemException: CWFDIH0002: An SQL exception was caught. The following error occurred: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null.
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.processDataConfig(DataImportPreProcessorMain.java:1515)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.execute(DataImportPreProcessorMain.java:1331)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.main(DataImportPreProcessorMain.java:534)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:56)
at java.lang.reflect.Method.invoke(Method.java:620)
at com.ibm.ws.bootstrap.WSLauncher.main(WSLauncher.java:280)
Caused by: com.ibm.db2.jcc.am.BatchUpdateException: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null
at com.ibm.db2.jcc.am.b4.a(b4.java:475)
at com.ibm.db2.jcc.am.Agent.endBatchedReadChain(Agent.java:414)
at com.ibm.db2.jcc.am.ki.a(ki.java:5342)
at com.ibm.db2.jcc.am.ki.c(ki.java:4929)
at com.ibm.db2.jcc.am.ki.executeBatch(ki.java:3045)
at com.ibm.commerce.foundation.dataimport.preprocess.AbstractDataPreProcessor.populateTable(AbstractDataPreProcessor.java:373)
at com.ibm.commerce.foundation.dataimport.preprocess.StaticAttributeDataPreProcessor.process(StaticAttributeDataPreProcessor.java:461)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.processDataConfig(DataImportPreProcessorMain.java:1482)
... 7 more
The exception seems clear, but I can't identify what element #415 of the batch is. Even the log doesn't help, because it doesn't point to another, more detailed log. Do you have any suggestion for finding it?
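For reference, the getNextException() hint in the log corresponds to standard JDBC handling roughly like the sketch below; this is only an illustration of the API (the Statement and its batch are placeholders, not the WebSphere Commerce pre-processor code):

try {
    stmt.executeBatch();
} catch (java.sql.BatchUpdateException bue) {
    // Positions reported as Statement.EXECUTE_FAILED are the failing batch elements (e.g. #415).
    int[] counts = bue.getUpdateCounts();
    for (int i = 0; i < counts.length; i++) {
        if (counts[i] == java.sql.Statement.EXECUTE_FAILED) {
            System.err.println("Batch element #" + (i + 1) + " failed");
        }
    }
    // Each chained exception describes one failed element (here: SQLCODE -302 / SQLSTATE 22001).
    java.sql.SQLException next = bue.getNextException();
    while (next != null) {
        System.err.println(next.getSQLState() + ": " + next.getMessage());
        next = next.getNextException();
    }
} catch (java.sql.SQLException e) {
    e.printStackTrace();
}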
Thanks to the comment from user #mao, I followed this link:
The failing table first must be identified. Enable more detailed tracing for di-preprocess:
Navigate to:
WC_installdir/instances/instance_name/xml/config/dataimport
and open the logging.properties file. Find all instances of INFO and
change them to FINEST. Optionally increase the size of the log file and
the number of historical log files while editing this file.
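For illustration, the kind of edit this describes, assuming the file follows the usual java.util.logging properties format (the logger name below is a placeholder; the real file lists the relevant WebSphere Commerce loggers):

# before: com.ibm.commerce.foundation.dataimport.level=INFO
com.ibm.commerce.foundation.dataimport.level=FINEST

# optionally keep larger and more numerous log files
java.util.logging.FileHandler.limit=50000000
java.util.logging.FileHandler.count=10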
Thanks to this suggestion, I re-ran the buildindex process and found that Solr was wrongly grouping fields from the original table, producing a value too long for the destination column, which caused the error.
Facing this error:
10:28:30.552 [main][] INFO com.intuit.karate - >> lock acquired, begin callSingle: classpath:preview-srs-session.feature
Exception in thread "main" java.lang.NoSuchFieldError: toStringWriter
at com.intuit.karate.JsonUtils$NashornObjectJsonWriter.writeJSONString(JsonUtils.java:76)
at com.intuit.karate.JsonUtils$NashornObjectJsonWriter.writeJSONString(JsonUtils.java:68)
at net.minidev.json.JSONValue.writeJSONString(JSONValue.java:596)
at net.minidev.json.reader.JsonWriter.writeJSONKV(JsonWriter.java:354)
at net.minidev.json.reader.JsonWriter$7.writeJSONString(JsonWriter.java:141)
at net.minidev.json.reader.JsonWriter$7.writeJSONString(JsonWriter.java:123)
at net.minidev.json.JSONValue.writeJSONString(JSONValue.java:596)
at net.minidev.json.reader.JsonWriter.writeJSONKV(JsonWriter.java:354)
at net.minidev.json.reader.JsonWriter$7.writeJSONString(JsonWriter.java:141)
at net.minidev.json.reader.JsonWriter$7.writeJSONString(JsonWriter.java:123)
at com.intuit.karate.JsonUtils$NashornObjectJsonWriter.writeJSONString(JsonUtils.java:78)
at com.intuit.karate.JsonUtils$NashornObjectJsonWriter.writeJSONString(JsonUtils.java:68)
at net.minidev.json.JSONValue.writeJSONString(JSONValue.java:596)
at net.minidev.json.JSONValue.toJSONString(JSONValue.java:632)
at net.minidev.json.JSONValue.toJSONString(JSONValue.java:610)
at com.intuit.karate.JsonUtils.toJson(JsonUtils.java:130)
at com.intuit.karate.JsonUtils.toJsonDoc(JsonUtils.java:172)
at com.intuit.karate.ScriptValue.<init>(ScriptValue.java:425)
at com.intuit.karate.ScriptValue.<init>(ScriptValue.java:417)
at com.intuit.karate.ScriptValueMap.put(ScriptValueMap.java:30)
at com.intuit.karate.core.ScenarioContext.<init>(ScenarioContext.java:292)
at com.intuit.karate.StepActions.<init>(StepActions.java:53)
at com.intuit.karate.core.ScenarioExecutionUnit.init(ScenarioExecutionUnit.java:141)
at com.intuit.karate.core.ScenarioExecutionUnit.run(ScenarioExecutionUnit.java:236)
at com.intuit.karate.core.FeatureExecutionUnit.run(FeatureExecutionUnit.java:164)
at com.intuit.karate.core.FeatureExecutionUnit.run(FeatureExecutionUnit.java:73)
at com.intuit.karate.core.Engine.executeFeatureSync(Engine.java:109)
at com.intuit.karate.Script.evalFeatureCall(Script.java:1769)
at com.intuit.karate.Script.evalFeatureCall(Script.java:1745)
at com.intuit.karate.core.ScriptBridge.call(ScriptBridge.java:421)
at com.intuit.karate.core.ScriptBridge.callSingle(ScriptBridge.java:450)
at jdk.nashorn.internal.scripts.Script$Recompilation$7$11$^eval_.L:1(<eval>:92)
at jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:637)
at jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:494)
at jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:393)
at jdk.nashorn.api.scripting.ScriptObjectMirror.call(ScriptObjectMirror.java:117)
at com.intuit.karate.Script.evalJsFunctionCall(Script.java:1693)
at com.intuit.karate.Script.call(Script.java:1650)
at com.intuit.karate.Script.callAndUpdateConfigAndAlsoVarsIfMapReturned(Script.java:1786)
at com.intuit.karate.core.ScenarioContext.<init>(ScenarioContext.java:267)
at com.intuit.karate.StepActions.<init>(StepActions.java:53)
at com.intuit.karate.core.ScenarioExecutionUnit.init(ScenarioExecutionUnit.java:141)
at com.intuit.karate.core.ScenarioExecutionUnit.run(ScenarioExecutionUnit.java:236)
at com.intuit.karate.core.FeatureExecutionUnit.run(FeatureExecutionUnit.java:164)
at com.intuit.karate.core.FeatureExecutionUnit.run(FeatureExecutionUnit.java:73)
at com.intuit.karate.core.Engine.executeFeatureSync(Engine.java:109)
at com.intuit.karate.IdeUtils.exec(IdeUtils.java:64)
at cucumber.api.cli.Main.main(Main.java:36)
Disconnected from the target VM, address: '127.0.0.1:61606', transport: 'socket'
Process finished with exit code 1
This happens because, when you mix Karate into a project with other Java dependencies, conflicting versions of the JSON handling library (json-smart) can end up on the classpath and confuse the JSON handling code.
Thanks to this article, the solution is to explicitly pin the dependency (Gradle):
testCompile "net.minidev:json-smart:2.3"
I have packaged my app with:
mvn -Pprod package
Then I ran
java -jar myapp-0.0.1-SNAPSHOT.war
and it works fine.
But if I run:
java -jar myapp-0.0.1-SNAPSHOT.war --spring.profiles.active=prod
I get this error:
[ERROR] org.springframework.boot.context.embedded.tomcat.ServletContextInitializerLifecycleListener - Error starting Tomcat context: org.springframework.beans.factory.BeanCreationException
[WARN] org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext - Exception encountered during context initialization - cancelling refresh attempt
org.springframework.context.ApplicationContextException: Unable to start embedded container; nested exception is org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:124) [spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:474) ~[spring-context-4.1.3.RELEASE.jar!/:4.1.3.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:109) [spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:691) [spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:321) [spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at com.myapp.Application.main(Application.java:57) [classes!/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_25]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_25]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_25]
at java.lang.reflect.Method.invoke(Method.java:483) ~[na:1.8.0_25]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:53) [bioandbio-0.0.1-SNAPSHOT.war:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
Caused by: org.springframework.boot.context.embedded.EmbeddedServletContainerException: Unable to start embedded Tomcat
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.initialize(TomcatEmbeddedServletContainer.java:97) ~[spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer.<init>(TomcatEmbeddedServletContainer.java:74) ~[spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getTomcatEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:374) ~[spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getEmbeddedServletContainer(TomcatEmbeddedServletContainerFactory.java:150) ~[spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.createEmbeddedServletContainer(EmbeddedWebApplicationContext.java:148) [spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(EmbeddedWebApplicationContext.java:121) [spring-boot-1.2.0.RELEASE.jar!/:1.2.0.RELEASE]
... 11 common frames omitted
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire method: public void org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration.setFilterChainProxySecurityConfigurer(org.springframework.security.config.annotation.ObjectPostProcessor,java.util.List) throws java.lang.Exception; nested exception is org.springframework.beans.factory.BeanExpressionException: Expression parsing failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'securityConfiguration': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private org.springframework.security.core.userdetails.UserDetailsService com.myapp.config.SecurityConfiguration.userDetailsService; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userDetailsService': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private com.myapp.repository.UserRepository com.myapp.security.UserDetailsService.userRepository; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userRepository': Cannot create inner bean '(inner bean)#1f1288f5' of type [org.springframework.orm.jpa.SharedEntityManagerCreator] while setting bean property 'entityManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)#1f1288f5': Cannot resolve reference to bean 'entityManagerFactory' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [com/myapp/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.ValidationFailedException: Validation Failed:
1 change sets check sum
classpath:config/liquibase/changelog/00000000000000_initial_schema.xml::00000000000001::jhipster is now: 7:788e6cd59e4764c45e1b83437356e748
I don't understand why the prod profile causes this issue.
If someone knows what's wrong here, please let me know.
Thank you.
Reading all the way to the bottom of the stack trace, I see that the root cause is a Liquibase checksum validation failure. I'm not sure how familiar you are with Liquibase, but it is a tool embedded inside JHipster that is used to manage database schema changes as your objects change. Liquibase uses an XML format to describe the database schema as a series of 'changesets'. When Liquibase deploys a changeset (say, a changeset with id "CreateTableFoo" that says "create table foo with columns bar, baz, etc."), it actually creates the table, and it also adds a row to a table called 'databasechangelog' recording that the changeset "CreateTableFoo" was successfully applied at such and such a time, together with the checksum of that changeset.
Now, if you come along and change the changeset XML so that "CreateTableFoo" instead creates a table with a different name or different columns, the checksum calculated for that changeset also changes, so when you try to update the schema Liquibase says "Wait! Something is wrong!"
So that is what is happening here.
Error creating bean with name 'liquibase' defined in class path resource [com/myapp/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.ValidationFailedException: Validation Failed:
1 change sets check sum
classpath:config/liquibase/changelog/00000000000000_initial_schema.xml::00000000000001::jhipster is now: 7:788e6cd59e4764c45e1b83437356e748
What this indicates is that the prod database has already had a Liquibase update run on it to populate the schema, but the changeset has changed since that initial deploy. I am not familiar with how JHipster uses Liquibase, so you would need to look at the file classpath:config/liquibase/changelog/00000000000000_initial_schema.xml to see what database schema it is trying to set up, and compare that to the schema that is actually in place in production.
Now that things are out of sync, you'll have to figure out a way to get them back in sync, which is more complicated than can be answered here and depends greatly on what the differences between the changelog and the production schema are.
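As a starting point for that comparison, you can look at what Liquibase recorded in the production database; a minimal sketch, assuming the default tracking table name databasechangelog:

-- Which changesets Liquibase believes were applied, and with which checksum
SELECT id, author, filename, dateexecuted, md5sum
FROM databasechangelog
ORDER BY orderexecuted;

The md5sum stored for changeset 00000000000001 can then be compared with the 7:788e... value in the error to see which side changed.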
I thought that the default configuration of the objectstore module in Mule was in-memory (http://mulesoft.github.io/mule-module-objectstore/mule/objectstore-config.html#config).
I have an objectstore configured as such in my app:
<objectstore:config name="sourceConfigStore" entryTtl="60000" ></objectstore:config>
I reference the store from a Java component like so:
ObjectStoreModule objectStore = (ObjectStoreModule) eventContext.getMuleContext().getRegistry().lookupObject("sourceConfigStore");
objectStore.store((String)sourceConfig.get("url"), sourceConfig, true);
This works for the most part, except I discovered today that this was writing files to disk when I got the following error:
Message : Unable to create a canonical file for parent: C:\git-ucd\.mule\.mule\edus-esb-rss-aggregator\objectstore and child: DEFAULT_PARTITION\news.ucdavis.edu/xml/getnews.php?type=category&categories=General+Interest&format=rss.obj (org.mule.api.MuleRuntimeException)
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. Invalid argument (java.io.IOException)
java.io.WinNTFileSystem:-2 (null)
2. Unable to create a canonical file for parent: C:\git-ucd\.mule\.mule\edus-esb-rss-aggregator\objectstore and child: DEFAULT_PARTITION\news.ucdavis.edu/xml/getnews.php?type=category&categories=General+Interest&format=rss.obj (org.mule.api.MuleRuntimeException)
org.mule.util.FileUtils:402 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MuleRuntimeException.html)
3. Unable to create a canonical file for parent: C:\git-ucd\.mule\.mule\edus-esb-rss-aggregator\objectstore and child: DEFAULT_PARTITION\news.ucdavis.edu/xml/getnews.php?type=category&categories=General+Interest&format=rss.obj (org.mule.api.MuleRuntimeException) (org.mule.api.store.ObjectStoreException)
org.mule.util.store.PartitionedPersistentObjectStore:278 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/store/ObjectStoreException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.io.IOException: Invalid argument
at java.io.WinNTFileSystem.canonicalize0(Native Method)
at java.io.Win32FileSystem.canonicalize(Win32FileSystem.java:414)
at java.io.File.getCanonicalPath(File.java:618)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
So, my question is whether or not the default behavior of the objectstore module is in fact to use an in-memory store. If that is the case, I guess my next question would be 'how did I override that default behavior with my above config and code'?
The default implementation is in-memory.
However, if you're running your application from Mule Studio, that's not the case: Mule Studio persists object stores to files by default. That is why your run configuration's General tab has the option to delete these files on each run.
In any case, the easiest way to force the in-memory store is something like this:
<objectstore:all-keys config-ref="_defaultInMemoryObjectStore"/>
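Alternatively, from a Java component you can look up Mule's default in-memory store directly by the same "_defaultInMemoryObjectStore" registry key; a hedged sketch only, reusing the lookup pattern from the question rather than the connector's API:

// Sketch: fetch the default in-memory object store from the Mule registry and write to it.
// Unlike the connector's store(key, value, overwrite), the core ObjectStore.store(key, value)
// throws ObjectAlreadyExistsException if the key is already present.
org.mule.api.store.ObjectStore<java.io.Serializable> inMemoryStore =
        (org.mule.api.store.ObjectStore<java.io.Serializable>) eventContext.getMuleContext()
            .getRegistry().lookupObject("_defaultInMemoryObjectStore");
try {
    inMemoryStore.store((String) sourceConfig.get("url"), (java.io.Serializable) sourceConfig);
} catch (org.mule.api.store.ObjectStoreException e) {
    // handle duplicate keys or store failures as appropriate
}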