Apache Oozie Launcher ERROR

I want to run a Sqoop action using Apache Oozie, but it always gives me this error:
Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.SqoopMain], main() threw exception, null
My workflow.xml is:
<workflow-app name="Datashop-Core-Sqoop" xmlns="uri:oozie:workflow:0.1">
<!-- <start to="inputAvailableCheckDecision"/>
<decision name="inputAvailableCheckDecision">
<switch>
<case to="sqoopAction">
${sqoopInputRecordCount gt minRequiredRecordCount}
</case>
<default to="end"/>
</switch>
</decision> -->
<start to="sqoopAction"/>
<action name="sqoopAction">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
<property>
<name>oozie.libpath</name>
<value>${oozieLibPath}</value>
</property>
</configuration>
<command>import --connect jdbc:mysql://${mysqlServer}/${mysqlServerDB} --username ${mysqlServerDBUID} --password ${mysqlServerDBPwd} --table ${mysqlTable} -m 1 --target-dir ${targetDir}</command>
</sqoop>
<ok to="end"/>
<error to="killJob"/>
</action>
<kill name="killJob">
<message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
</kill>
<end name="end" />
</workflow-app>
This always throws the SqoopMain class exception, which is finally treated as an error:
#sqoopAction] ERROR is considered as FAILED for SLA
In another Stack Overflow question I found that the external library needs to be included with <file>,
so I changed my workflow.xml and added this line below the <command> tag:
<file>${oozieLibPath}/sqoop/mysql-connector-java-5.1.37-bin.jar#mysql-connector-java-5.1.37-bin.jar</file>
But the same error is still produced. What may be the reason for this error? Is there anything else I need to configure?
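For reference, here is the full action as it stands now, with <file> placed after <command> as the sqoop-action:0.2 schema requires (a minimal sketch; the jar location under ${oozieLibPath} reflects my layout and is an assumption):
<action name="sqoopAction">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<command>import --connect jdbc:mysql://${mysqlServer}/${mysqlServerDB} --username ${mysqlServerDBUID} --password ${mysqlServerDBPwd} --table ${mysqlTable} -m 1 --target-dir ${targetDir}</command>
<file>${oozieLibPath}/sqoop/mysql-connector-java-5.1.37-bin.jar#mysql-connector-java-5.1.37-bin.jar</file>
</sqoop>
<ok to="end"/>
<error to="killJob"/>
</action>
Since "main() threw exception, null" hides the real cause, the launcher container's stdout (for example via yarn logs -applicationId <application-id>) usually shows the underlying Sqoop stack trace.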

Related

Issue in 2-node cluster using Infinispan 5.3 | ActionStatus.ABORT_ONLY is not in a valid state to be invoking cache operations on

I have a 2-node cluster set up using Infinispan 5.3, and I am testing the failover scenario. When I kill one node, I get the exception below (I'm using the sync cache). The cluster does not recover, so I need to restart the application, which is not practical in a production environment.
2020-05-06 18:50:28,082 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] ISPN000136: Execution error
java.lang.IllegalStateException: Transaction TransactionImple < ac, BasicAction: -3f57f478:dd0a:5eb2b455:2d461 status: ActionStatus.ABORT_ONLY > is not in a valid state to be invoking cache operations on.
at org.infinispan.interceptors.TxInterceptor.enlist(TxInterceptor.java:275)
at org.infinispan.interceptors.TxInterceptor.enlistIfNeeded(TxInterceptor.java:239)
at org.infinispan.interceptors.TxInterceptor.enlistReadAndInvokeNext(TxInterceptor.java:233)
at org.infinispan.interceptors.TxInterceptor.visitGetKeyValueCommand(TxInterceptor.java:229)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:134)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.statetransfer.StateTransferInterceptor.handleTopologyAffectedCommand(StateTransferInterceptor.java:216)
at org.infinispan.statetransfer.StateTransferInterceptor.handleDefault(StateTransferInterceptor.java:200)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.CacheMgmtInterceptor.visitGetKeyValueCommand(CacheMgmtInterceptor.java:113)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:134)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.interceptors.IsMarshallableInterceptor.visitGetKeyValueCommand(IsMarshallableInterceptor.java:97)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:120)
at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:128)
at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:92)
at org.infinispan.commands.AbstractVisitor.visitGetKeyValueCommand(AbstractVisitor.java:96)
at org.infinispan.commands.read.GetKeyValueCommand.acceptVisitor(GetKeyValueCommand.java:62)
at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:343)
at org.infinispan.CacheImpl.containsKey(CacheImpl.java:372)
at org.infinispan.DecoratedCache.containsKey(DecoratedCache.java:410)
at com.abcr.ServiceContext.existsInSyncCache(ServiceContext.java:1740)
at com.abcr.ServiceContext.getObjectForUpdateInSyncCache(ServiceContext.java:1778)
at com.abcr.core.cache.ClusterServiceNodeListCacheManager.getObjectForUpdate(ClusterServiceNodeListCacheManager.java:90)
at com.suntecgroup.tbms.tpe.core.server.ServerManager.callBackOnMembersModified(ServerManager.java:3385)
at com.abcr.core.ServiceContainerCommandDespatcher.run(ServiceContainerCommandDespatcher.java:64)
2020-05-06 18:50:28,086 ERROR [com.abcr.core.ServiceContainer] Invocation of callback APIs on leaving coordinator role failed for service 'ABC'.
com.suntecgroup.tbms.container.services.ContainerPlatformServicesException: Failed to retrieve object[SERVER/SERVICE_NODES/28000] for update.
This is my Infinispan and JGroups configuration:
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.3 http://www.infinispan.org/schemas/infinispan-config-5.3.xsd"
xmlns="urn:infinispan:config:5.3">
<global>
<!-- Note that if these are left blank, defaults are used. See the user guide for what these defaults are -->
<asyncListenerExecutor factory="org.infinispan.executors.DefaultExecutorFactory">
<properties>
<property name="maxThreads" value="5" />
<property name="threadNamePrefix" value="AsyncListenerThread" />
</properties>
</asyncListenerExecutor>
<asyncTransportExecutor factory="org.infinispan.executors.DefaultExecutorFactory">
<properties>
<property name="maxThreads" value="25" />
<property name="threadNamePrefix" value="AsyncSerializationThread" />
</properties>
</asyncTransportExecutor>
<evictionScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
<properties>
<property name="threadNamePrefix" value="EvictionThread" />
</properties>
</evictionScheduledExecutor>
<replicationQueueScheduledExecutor factory="org.infinispan.executors.DefaultScheduledExecutorFactory">
<properties>
<property name="threadNamePrefix" value="ReplicationQueueThread" />
</properties>
</replicationQueueScheduledExecutor>
<globalJmxStatistics enabled="false" jmxDomain="infinispan_1" />
<!--
If the transport is omitted, there is no way to create distributed or clustered caches.
There is no added cost to defining a transport but not creating a cache that uses one, since the transport
is created and initialized lazily.
-->
<transport clusterName="PC_SITE_1" distributedSyncTimeout="50000" transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
<properties>
<property name="configurationFile" value="./tmp/_clusterconfig/pc_jgroups_main_sync.xml" />
</properties>
</transport>
<!-- Note that the JGroups transport uses sensible defaults if no configuration property is defined. -->
<!-- See the JGroupsTransport javadocs for more flags -->
<!-- Again, sensible defaults are used here if this is omitted. -->
<serialization marshallerClass="org.infinispan.marshall.VersionAwareMarshaller" version="1.0" />
<!--
Used to register JVM shutdown hooks.
hookBehavior: DEFAULT, REGISTER, DONT_REGISTER
-->
<shutdown hookBehavior="DEFAULT" />
</global>
<!-- *************************** -->
<!-- Default "template" settings -->
<!-- *************************** -->
<!-- this is used as a "template" configuration for all caches in the system. -->
<default>
<!--
isolation levels supported: READ_COMMITTED and REPEATABLE_READ
-->
<locking isolationLevel="READ_COMMITTED" lockAcquisitionTimeout="60000" writeSkewCheck="false" concurrencyLevel="5000" useLockStriping="false" />
<!--
Used to register a transaction manager and participate in ongoing transactions.
-->
<!-- ECPCacheTxManagerLookup -->
<!--
Used to register JMX statistics in any available MBean server
-->
<jmxStatistics enabled="false" />
<!--
Used to enable invocation batching and allow the use of Cache.startBatch()/endBatch() methods.
-->
<clustering mode="replication">
<sync replTimeout="600000" />
<stateTransfer timeout="480000" fetchInMemoryState="true" />
</clustering>
<storeAsBinary enabled="true" />
</default>
<namedCache name="GLOBAL_SYNC_CACHE">
<transaction transactionMode="TRANSACTIONAL" transactionManagerLookupClass="com.suntecgroup.tbms.container.services.cluster.ContainerCacheTxManagerLookup" syncRollbackPhase="false" syncCommitPhase="true" useEagerLocking="true" lockingMode="PESSIMISTIC" />
</namedCache>
<namedCache name="GLOBAL_NONTX_SYNC_CACHE">
<transaction transactionMode="NON_TRANSACTIONAL" />
</namedCache>
</infinispan>
JGroups configuration:
<?xml version="1.0" encoding="UTF-8"?>
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups file:schema/JGroups-2.8.xsd">
<TCP bind_port="7800" loopback="true" recv_buf_size="20M" send_buf_size="640K" max_bundle_size="64000" max_bundle_timeout="30" enable_bundling="false" use_send_queues="true" sock_conn_timeout="300" tcp_nodelay="true" thread_pool.enabled="true" thread_pool.min_threads="1" thread_pool.max_threads="25" thread_pool.keep_alive_time="5000" thread_pool.queue_enabled="false" thread_pool.queue_max_size="100" thread_pool.rejection_policy="run" oob_thread_pool.enabled="true" oob_thread_pool.min_threads="1" oob_thread_pool.max_threads="8" oob_thread_pool.keep_alive_time="5000" oob_thread_pool.queue_enabled="false" oob_thread_pool.queue_max_size="100" oob_thread_pool.rejection_policy="run" enable_diagnostics="false" />
<!--MPING mcast_addr="232.1.2.13"
mcast_port="7500"
num_initial_members="2"
timeout="2000" /-->
<TCPPING timeout="3000" initial_hosts="${jgroups.tcpping.initial_hosts:localhost[7800],localhost[7801]}" port_range="0" num_initial_members="3" />
<MERGE2 max_interval="100000" min_interval="20000" />
<FD_SOCK />
<FD timeout="60000" max_tries="5" />
<VERIFY_SUSPECT timeout="30000" />
<BARRIER />
<pbcast.NAKACK use_mcast_xmit="false" exponential_backoff="500" discard_delivered_msgs="true" />
<UNICAST timeout="300,600,1200" />
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000" />
<pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true" />
<FRAG2 frag_size="60000" />
<pbcast.STATE_TRANSFER />
</config>
The current transaction was aborted (probably due to a timeout, but maybe as a consequence of a delivery failure). You need to roll back the current transaction and start a new one.
However, let me note that 5.3 was released on 2013-06-26; you're using an almost seven-year-old version. If there is a bug, no one will even try to check it out.
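On the timeout theory, a hedged observation about the posted JGroups stack (not a confirmed fix): with <FD timeout="60000" max_tries="5" />, a killed node is only suspected after roughly five minutes of silence, and <VERIFY_SUSPECT timeout="30000" /> adds another 30 seconds on top; until the new cluster view is installed, synchronous operations keep blocking and transactions keep aborting. Detecting the failure faster shrinks that window, for example (illustrative values, not tuned for your network):
<FD timeout="10000" max_tries="3" />
<VERIFY_SUSPECT timeout="5000" />
This does not revive an already-aborted transaction (it still has to be rolled back as described above), but it shortens the period in which every cache operation fails.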

hive -e throws NoSuchMethodError when called from an Oozie sub-workflow using <shell>, but runs perfectly when called from the main workflow

I was refactoring an Oozie workflow that was written entirely in a single file, trying to break it down into sub-workflows.
But after refactoring, it started throwing:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.io.retry.RetryPolicies.retryForeverWithFixedSleep(JLjava/util/concurrent/TimeUnit;)Lorg/apache/hadoop/io/retry/RetryPolicy;
Original workflow:
<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.5" name="Main">
<start to="loadToHive"/>
<action name="loadToHive">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>yarn.nodemanager.container-executor.class</name>
<value>LinuxContainerExecutor</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-user</name>
<value>true</value>
</property>
</configuration>
<exec>${loadToHiveActionScript}</exec>
<argument>${outPutPath}</argument>
<argument>${dataSetPath}</argument>
<argument>${hiveDB}</argument>
<env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
<file>${loadToHiveActionScriptPath}#${loadToHiveActionScript}</file>
</shell>
<ok to="uplaodToMysql"/>
<error to="handleFailure"/>
</action>
Refactored file:
<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.5" name="Main">
<start to="loadToHive"/>
<action name="loadToHive">
<sub-workflow>
<app-path>${oozieProjectRoot}/commonWorkflows/mongoTransform.xml</app-path>
<propagate-configuration/>
</sub-workflow>
<ok to="uplaodToMysql"/>
<error to="handleFailure"/>
</action>
Subworkflow file:
<?xml version="1.0" encoding="UTF-8"?>
<workflow-app name="mongoTransform-${module}" xmlns="uri:oozie:workflow:0.5">
<start to="loadToHiveSub"/>
<action name="loadToHive">
<shell xmlns="uri:oozie:shell-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>yarn.nodemanager.container-executor.class</name>
<value>LinuxContainerExecutor</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-user</name>
<value>true</value>
</property>
</configuration>
<exec>${loadToHiveActionScript}</exec>
<argument>${outPutPath}</argument>
<argument>${dataSetPath}</argument>
<argument>${hiveDB}</argument>
<env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
<file>${loadToHiveActionScriptPath}#${loadToHiveActionScript}</file>
</shell>
<ok to="end"/>
<error to="handleFailure"/>
</action>
loadToHiveActionScript.sh
hive -e "Drop table if exists ${3}.${i}_intermediate";
...
hive -e " Alter table ${3}.${i}_intermediate RENAME TO ${3}.$i";
When this is run from the main workflow file, it executes perfectly fine.
Could this be an issue with the env-var HADOOP_USER_NAME=${wf:user()}?
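A hedged guess rather than a confirmed diagnosis: since the identical action works from the main workflow, the sub-workflow launcher is probably assembling a different classpath; a NoSuchMethodError on RetryPolicies typically means mixed Hadoop jar versions rather than a permissions problem, so HADOOP_USER_NAME=${wf:user()} is unlikely to be the culprit. Oozie has a job-level switch that lets a sub-workflow inherit its parent's classpath; it normally goes into job.properties as oozie.wf.subworkflow.classpath.inheritance=true, and sketched as action configuration it would look like:
<sub-workflow>
<app-path>${oozieProjectRoot}/commonWorkflows/mongoTransform.xml</app-path>
<propagate-configuration/>
<configuration>
<property>
<name>oozie.wf.subworkflow.classpath.inheritance</name>
<value>true</value>
</property>
</configuration>
</sub-workflow>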

How to use the dirSize() EL function in an Oozie decision node

I tried to use dirSize() in an Oozie decision node, but it does not work.
When I use ${fs:dirSize(InputDir) gt 10 * KB} in the decision node, the Oozie workflow moves to the FAILED state and no error is displayed.
Please find below the code snippet from the workflow.
<action name="L1_check" cred="hcat_creds">
<hive xmlns="uri:oozie:hive-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<script>l1_check_code.sql</script>
<param>PI_DB=${PI_DB}</param>
<param>PRD_DB=${PRD_DB}</param>
<param>PROMO_START_DATE=${PROMO_START_DATE}</param>
<param>PROMO_END_DATE=${PROMO_END_DATE}</param>
</hive>
<ok to="decision-node" />
<error to="fail" />
</action>
<decision name="decision-node">
<switch>
<case to="L1_exists">
${fs:dirSize(InputDir) gt 10 * KB}
</case>
<default to="fail"/>
</switch>
</decision>
InputDir is defined in the properties file and the path exists.
The first action, L1_check, executes and transitions to the decision node; then the workflow status changes to FAILED.
Is there any error in the way the function is used? If yes, what is the right way to use it?
Also, is there a way to give a path that combines a parameter and a string,
e.g. dirSize(InputDir/l1_check),
where InputDir is a parameter and l1_check is a static name?
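Two hedged suggestions based on the standard Oozie EL functions: fs:dirSize(path) returns the size in bytes of the files in the given path (and -1 if the path does not exist or is not a directory), so InputDir must resolve to a path the workflow user can read; and a parameter has to be joined with a static string using concat() rather than written inline. A sketch of the decision node for the combined path:
<decision name="decision-node">
<switch>
<case to="L1_exists">
${fs:dirSize(concat(InputDir, '/l1_check')) gt 10 * KB}
</case>
<default to="fail"/>
</switch>
</decision>
Here concat(InputDir, '/l1_check') joins the InputDir parameter with the static name l1_check before fs:dirSize sees it.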

When the connection to the Oracle DB is dropped, "PooledConnection has already been closed" is printed continuously

I've installed wso2esb-4.9.0, and the master datasource is configured against an Oracle DB. When the connection to the Oracle database is dropped due to a network connectivity issue, the following error is printed continually, even after the connection comes back. The exception stops only after a restart.
TID: [-1234] [] [2016-05-16 00:53:21,457] ERROR {org.wso2.carbon.registry.core.utils.RegistryUtils} - Failed to construct the connectionId. {org.wso2.carbon.registry.core.utils.RegistryUtils}
java.sql.SQLRecoverableException: Closed Connection
at oracle.jdbc.driver.PhysicalConnection.getMetaData(PhysicalConnection.java:3131)
at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.tomcat.jdbc.pool.ProxyConnection.invoke(ProxyConnection.java:126)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.wso2.carbon.ndatasource.rdbms.ConnectionRollbackOnReturnInterceptor.invoke(ConnectionRollbackOnReturnInterceptor.java:51)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.interceptor.AbstractCreateStatementInterceptor.invoke(AbstractCreateStatementInterceptor.java:71)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.interceptor.ConnectionState.invoke(ConnectionState.java:153)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.TrapException.invoke(TrapException.java:41)
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.DisposableConnectionFacade.invoke(DisposableConnectionFacade.java:80)
at com.sun.proxy.$Proxy14.getMetaData(Unknown Source)
at org.wso2.carbon.registry.core.utils.RegistryUtils.getConnectionId(RegistryUtils.java:194)
at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCDatabaseTransaction$ManagedRegistryConnection.getConnectionId(JDBCDatabaseTransaction.java:1133)
at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCDatabaseTransaction$ManagedRegistryConnection.rollback(JDBCDatabaseTransaction.java:1288)
at org.wso2.carbon.registry.core.jdbc.dataaccess.JDBCTransactionManager.rollbackTransaction(JDBCTransactionManager.java:120)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCLogsDAO.rollbackTransaction(JDBCLogsDAO.java:335)
at org.wso2.carbon.registry.core.jdbc.dao.JDBCLogsDAO.getLogList(JDBCLogsDAO.java:306)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.getLogs(EmbeddedRegistry.java:2332)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.getLogs(CacheBackedRegistry.java:402)
at org.wso2.carbon.registry.core.session.UserRegistry.getLogsInternal(UserRegistry.java:1806)
at org.wso2.carbon.registry.core.session.UserRegistry.access$3600(UserRegistry.java:60)
at org.wso2.carbon.registry.core.session.UserRegistry$37.run(UserRegistry.java:1777)
at org.wso2.carbon.registry.core.session.UserRegistry$37.run(UserRegistry.java:1774)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.getLogs(UserRegistry.java:1774)
at org.wso2.carbon.registry.indexing.ResourceSubmitter.submitResource(ResourceSubmitter.java:119)
at org.wso2.carbon.registry.indexing.ResourceSubmitter.run(ResourceSubmitter.java:76)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
master-datasources.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?><datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">
<providers>
<provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
</providers>
<datasources>
<datasource>
<name>WSO2_CARBON_DB</name>
<description>The datasource used for registry and user manager</description>
<jndiConfig>
<name>jdbc/WSO2CarbonDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@xxx.xx.x.xxx:1528/dvdb</url>
<username>esbuser</username>
<password svns:secretAlias="Datasources.WSO2_CARBON_DB.Configuration.Password">password</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>80</maxActive>
<maxWait>60000</maxWait>
<minIdle>5</minIdle>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
<datasource>
<name>WSO2_REG_DB</name>
<description>The datasource used for registry and user manager</description>
<jndiConfig>
<name>jdbc/WSO2REG</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@xxx.xx.x.xxx:1528/dvdb</url>
<username>reguser</username>
<password svns:secretAlias="Datasources.WSO2_REG_DB.Configuration.Password">password</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<minIdle>5</minIdle>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<testWhileIdle>true</testWhileIdle>
<timeBetweenEvictionRunsMillis>1800000</timeBetweenEvictionRunsMillis>
<numTestsPerEvictionRun>5</numTestsPerEvictionRun>
<minEvictableIdleTimeMillis>3600000</minEvictableIdleTimeMillis>
</configuration>
</definition>
</datasource>
<!-- For an explanation of the properties, see: http://people.apache.org/~fhanik/jdbc-pool/jdbc-pool.html -->
<!--datasource>
<name>SAMPLE_DATA_SOURCE</name>
<jndiConfig>
<name></name>
<environment>
<property name="java.naming.factory.initial"></property>
<property name="java.naming.provider.url"></property>
</environment>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<defaultAutoCommit></defaultAutoCommit>
<defaultReadOnly></defaultReadOnly>
<defaultTransactionIsolation>NONE|READ_COMMITTED|READ_UNCOMMITTED|REPEATABLE_READ|SERIALIZABLE</defaultTransactionIsolation>
<defaultCatalog></defaultCatalog>
<username></username>
<password svns:secretAlias="WSO2.DB.Password"></password>
<maxActive></maxActive>
<maxIdle></maxIdle>
<initialSize></initialSize>
<maxWait></maxWait>
<dataSourceClassName>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</dataSourceClassName>
<dataSourceProps>
<property name="url">jdbc:mysql://localhost:3306/Test1</property>
<property name="user">root</property>
<property name="password">123</property>
</dataSourceProps>
</configuration>
</definition>
</datasource-->
</datasources>
repository/conf/registry.xml
<currentDBConfig>wso2registry</currentDBConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
<dbConfig name="wso2registry">
<dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>
<dbConfig name="remoteRegistry">
<dataSource>jdbc/WSO2REG</dataSource>
</dbConfig>
<remoteInstance url="https://registryhost:9453/registry">
<id>9453</id>
<dbConfig>remoteRegistry</dbConfig>
<readOnly>false</readOnly>
<registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/config" overwrite="false">
<instanceId>9453</instanceId>
<targetPath>/_system/nodes</targetPath>
</mount>
<mount path="/_system/governance" overwrite="false">
<instanceId>9453</instanceId>
<targetPath>/_system/governance</targetPath>
</mount>
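One difference between the two pools stands out (a hedged observation, not a confirmed fix): WSO2_CARBON_DB validates connections on borrow, while WSO2_REG_DB only tests idle connections on the eviction run every 30 minutes, so a connection that died during the outage can be handed out to the registry and keep failing with "Closed Connection" long after the network is back. Mirroring the first pool's validation settings in the second, i.e. adding
<testOnBorrow>true</testOnBorrow>
<validationInterval>30000</validationInterval>
next to the existing validationQuery, makes the Tomcat JDBC pool re-check a borrowed connection (at most once per validationInterval, here 30 seconds) and discard dead ones.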
In my case the problem was caused by a line I had wrongly deleted from repository/conf/registry.xml.
I'd removed this code:
<dbConfig name="wso2registry">
<dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>
instead of adding another dbConfig tag.
For me, the correction was to re-add that tag, like this:
<wso2registry>
<!-- These are used to define the DB configuration and the basic parameters to be used for the registry -->
<currentDBConfig>wso2registry</currentDBConfig>
<readOnly>false</readOnly>
<enableCache>true</enableCache>
<registryRoot>/</registryRoot>
<!-- This defines the default database and its configuration of the registry -->
<dbConfig name="wso2registry">
<dataSource>jdbc/WSO2CarbonDB</dataSource>
</dbConfig>
<dbConfig name="sharedregistry">
<dataSource>jdbc/WSO2RegistryDB</dataSource>
</dbConfig>
...
...
</wso2registry>
Hope this post helps you.
I had the same issue.
In master-datasources.xml:
1. Set the datasource name to the same value as the jndiConfig name.
2. Set up two different datasources with distinct names for the registry and for user management.
Here is my master-datasources.xml:
<datasource>
<name>WSO2REG_DB</name>
<description>The datasource used for registry</description>
<jndiConfig>
<name>jdbc/WSO2REG_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/registry?autoReconnect=true&amp;relaxAutoCommit=true&amp;</url>
<username>apiuser</username>
<password>apimanager</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
<datasource>
<name>WSO2UM_DB</name>
<description>The datasource used for user management</description>
<jndiConfig>
<name>jdbc/WSO2UM_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:mysql://localhost:3306/userstore?autoReconnect=true&amp;relaxAutoCommit=true&amp;</url>
<username>apiuser</username>
<password>apimanager</password>
<driverClassName>com.mysql.jdbc.Driver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
In registry.xml:
1. Set the wso2registry dbConfig to the user management datasource.
2. Set the govregistry dbConfig to the registry datasource.
Here is my registry.xml:
<dbConfig name="wso2registry">
<dataSource>jdbc/WSO2UM_DB</dataSource>
</dbConfig>
<dbConfig name="govregistry">
<dataSource>jdbc/WSO2REG_DB</dataSource>
</dbConfig>
It worked for me, and my IS server could start properly. Hope it works for you too, if the problem isn't already fixed.

Oozie EL function: An exception occured trying to convert String to type "java.lang.Double"

I am trying to run an Oozie workflow that calls the EL function replaceAll(). The action that uses replaceAll() is this:
<action name="createSuccess">
<fs>
<configuration>
<property>
<name>runDate</name>
<value>${replaceAll(hdfsDir, nameNode + '/(.+)/' + region + '/([0-9\\-]+)/?', '$2')}</value>
</property>
</configuration>
<mkdir path="${nameNode}/path/run/${region}/${runDate}"/>
<touchz path="${nameNode}/path/run/${region}/${runDate}/success.txt"/>
</fs>
<ok to="end"/>
<error to="sendEmailKill"/>
</action>
hdfsDir is something like hdfs://nameNode:8020/some/path/region/2015-04-22, and I need to grab the date at the end as a property and use it.
But when I run the above action, I get this exception:
javax.servlet.jsp.el.ELException: An exception occured trying to convert String "hdfs://nameNode:8020" to type "java.lang.Double"
at org.apache.commons.el.Logger.logError(Logger.java:481)
at org.apache.commons.el.Logger.logError(Logger.java:498)
at org.apache.commons.el.Logger.logError(Logger.java:566)
at org.apache.commons.el.Coercions.coerceToPrimitiveNumber(Coercions.java:440)
at org.apache.commons.el.Coercions.applyArithmeticOperator(Coercions.java:852)
at org.apache.commons.el.ArithmeticOperator.apply(ArithmeticOperator.java:83)
at org.apache.commons.el.BinaryOperatorExpression.evaluate(BinaryOperatorExpression.java:170)
at org.apache.commons.el.FunctionInvocation.evaluate(FunctionInvocation.java:163)
at org.apache.commons.el.ExpressionString.evaluate(ExpressionString.java:114)
at org.apache.commons.el.ExpressionEvaluatorImpl.evaluate(ExpressionEvaluatorImpl.java:274)
at org.apache.commons.el.ExpressionEvaluatorImpl.evaluate(ExpressionEvaluatorImpl.java:190)
at org.apache.oozie.util.ELEvaluator.evaluate(ELEvaluator.java:203)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:175)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:60)
at org.apache.oozie.command.XCommand.call(XCommand.java:280)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:326)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:255)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Any ideas why I'm getting this exception and how to fix it?
After trial and error, I figured out that I cannot use "+" to concatenate two strings: the EL evaluator treats "+" as arithmetic addition and tries to coerce both operands to numbers, which is where the java.lang.Double coercion in the stack trace comes from. I have to use this instead:
${replaceAll(hdfsDumpDir, concat(concat(concat(nameNode, '/(.+)/'), region), '/'), '')}
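Applying the same fix to the original createSuccess property (a sketch, untested; the nested concat calls just rebuild the pattern that "+" was supposed to assemble):
<property>
<name>runDate</name>
<value>${replaceAll(hdfsDir, concat(concat(concat(nameNode, '/(.+)/'), region), '/([0-9\\-]+)/?'), '$2')}</value>
</property>
Each concat joins exactly two strings, so reproducing nameNode + '/(.+)/' + region + '/([0-9\\-]+)/?' takes three nested calls.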