HornetQ 2.4.0 does not replicate bindings journal files to the backup server - replication

At first, sorry for my English)
We have RHEL 6.6 and HornetQ 2.4.0 in a live/backup configuration, with a queue whose messages range from 2 to 11 megabytes.
When the backup server starts, the live node initiates replication to the backup, but the process never completes and is not dropped with an error.
For example:
Live server log showing replication being initiated:
[2015-05-05 17:27:32,587] [INFO ] [Thread-2] [org.hornetq.core.server] HQ221025: Replication: sending JournalFileImpl: (hornetq-data-759.hq id = 192, recordID = 192) (size=10,485,760) to backup. NIOSequentialFile ../data/server0/data/messaging/journal/hornetq-data-759.hq
[2015-05-05 17:27:32,633] [INFO ] [Thread-2] [org.hornetq.core.server] HQ221025: Replication: sending JournalFileImpl: (hornetq-data-749.hq id = 755, recordID = 755) (size=10,485,760) to backup. NIOSequentialFile ../data/server0/data/messaging/journal/hornetq-data-749.hq
[2015-05-05 17:27:32,675] [INFO ] [Thread-2] [org.hornetq.core.server] HQ221025: Replication: sending JournalFileImpl: (hornetq-bindings-365.bindings id = 1, recordID = 1) (size=1,048,576) to backup. NIOSequentialFile ../data/server0/data/messaging/bindings/hornetq-bindings-365.bindings
[2015-05-05 17:27:32,686] [INFO ] [Thread-2] [org.hornetq.core.server] HQ221025: Replication: sending JournalFileImpl: (hornetq-bindings-369.bindings id = 2, recordID = 2) (size=1,048,576) to backup. NIOSequentialFile ../data/server0/data/messaging/bindings/hornetq-bindings-369.bindings
[2015-05-05 17:27:32,689] [INFO ] [Thread-2] [org.hornetq.core.server] HQ221025: Replication: sending JournalFileImpl: (hornetq-bindings-362.bindings id = 366, recordID = 366) (size=1,048,576) to backup. NIOSequentialFile ../data/server0/data/messaging/bindings/hornetq-bindings-362.bindings
Log of journal files being pushed on the backup server:
[2015-05-05 18:56:15,558] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-data-190.hq
[2015-05-05 18:56:15,655] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-data-190.hq.tmp as hornetq-data-190.hq
[2015-05-05 18:56:15,662] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-data-191.hq
[2015-05-05 18:56:15,722] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-data-191.hq.tmp as hornetq-data-191.hq
[2015-05-05 18:56:15,723] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-data-774.hq
[2015-05-05 18:56:15,771] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-data-774.hq.tmp as hornetq-data-774.hq
[2015-05-05 18:56:15,775] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-data-775.hq
[2015-05-05 18:56:15,826] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-data-775.hq.tmp as hornetq-data-775.hq
[2015-05-05 18:56:15,826] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-data-776.hq
[2015-05-05 18:56:15,879] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-data-776.hq.tmp as hornetq-data-776.hq
[2015-05-05 18:56:15,880] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] pushing openFile JournalFileImpl: (hornetq-data-776.hq id = 776, recordID = 776)
[2015-05-05 18:56:15,882] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-bindings-1.bindings
[2015-05-05 18:56:15,889] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-bindings-1.bindings.tmp as hornetq-bindings-1.bindings
[2015-05-05 18:56:15,889] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-bindings-380.bindings
[2015-05-05 18:56:15,898] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-bindings-380.bindings.tmp as hornetq-bindings-380.bindings
[2015-05-05 18:56:15,898] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-bindings-381.bindings
[2015-05-05 18:56:15,905] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-bindings-381.bindings.tmp as hornetq-bindings-381.bindings
[2015-05-05 18:56:15,906] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Creating file hornetq-bindings-382.bindings
[2015-05-05 18:56:15,916] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] Renaming file hornetq-bindings-382.bindings.tmp as hornetq-bindings-382.bindings
[2015-05-05 18:56:15,917] [TRACE] [Thread-1 (HornetQ-client-netty-threads-1442326569)] [org.hornetq.journal] pushing openFile JournalFileImpl: (hornetq-bindings-382.bindings id = 382, recordID = 382)
And the backup server starting up:
[2015-05-06 11:16:26,919] [INFO ] [main] [org.hornetq.integration.bootstrap] HQ101000: Starting HornetQ Server
[2015-05-06 11:16:27,880] [INFO ] [main] [org.hornetq.core.server] HQ221000: backup server is starting with configuration HornetQ Configuration (clustered=true,backup=true,sharedStore=false,journalDirectory=../data/server0/data/messaging/journal,bindingsDirectory=../data/server0/data/messaging/bindings,largeMessagesDirectory=../data/server0/data/messaging/largemessages,pagingDirectory=../data/server0/data/messaging/paging)
[2015-05-06 11:16:27,892] [WARN ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ222162: Moving data directory ../data/server0/data/messaging/bindings to ../data/server0/data/messaging/bindings12
[2015-05-06 11:16:27,893] [WARN ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ222162: Moving data directory ../data/server0/data/messaging/journal to ../data/server0/data/messaging/journal12
[2015-05-06 11:16:27,893] [WARN ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ222162: Moving data directory ../data/server0/data/messaging/paging to ../data/server0/data/messaging/paging12
[2015-05-06 11:16:27,893] [WARN ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ222162: Moving data directory ../data/server0/data/messaging/largemessages to ../data/server0/data/messaging/largemessages12
[2015-05-06 11:16:28,095] [INFO ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ221013: Using NIO Journal
[2015-05-06 11:16:28,140] [WARN ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ222007: Security risk! HornetQ is running with the default cluster admin user and default password. Please see the HornetQ user guide, cluster chapter, for instructions on how to change this.
[2015-05-06 11:16:28,166] [INFO ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ221043: Adding protocol support CORE
[2015-05-06 11:16:28,169] [INFO ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ221043: Adding protocol support STOMP
[2015-05-06 11:16:28,171] [INFO ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ221043: Adding protocol support AMQP
[2015-05-06 11:16:28,618] [INFO ] [HQ119000: Activation for server HornetQServerImpl::serverUUID=null] [org.hornetq.core.server] HQ221109: HornetQ Backup Server version 2.5.0.SNAPSHOT (Wild Hornet, 124) [null] started, waiting live to fail before it gets active
hornetq-configuration.xml
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">
<bindings-directory>${data.dir:../data}/server0/data/messaging/bindings</bindings-directory>
<journal-directory>${data.dir:../data}/server0/data/messaging/journal</journal-directory>
<large-messages-directory>${data.dir:../data}/server0/data/messaging/largemessages</large-messages-directory>
<paging-directory>${data.dir:../data}/server0/data/messaging/paging</paging-directory>
<shared-store>false</shared-store>
<backup-group-name>hub_group</backup-group-name>
<failover-on-shutdown>true</failover-on-shutdown>
<allow-failback>true</allow-failback>
<connection-ttl-override>100000</connection-ttl-override>
<check-for-live-server>true</check-for-live-server>
<security-enabled>false</security-enabled>
<journal-type>NIO</journal-type>
<!-- Connectors -->
<connectors>
<connector name="netty-connector">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
<param key="port" value="5445"/>
<param key="host" value="host1"/>
<param key="tcp-send-buffer-size" value="524288"/>
<param key="tcp-receive-buffer-size" value="524288"/>
</connector>
</connectors>
<!-- Acceptors -->
<acceptors>
<acceptor name="netty-acceptor">
<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
<param key="port" value="5445"/>
<param key="host" value="host1"/>
<param key="tcp-send-buffer-size" value="524288"/>
<param key="tcp-receive-buffer-size" value="524288"/>
</acceptor>
</acceptors>
<!-- Clustering configuration -->
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<group-address>${udp-address:231.7.7.9}</group-address>
<group-port>9876</group-port>
<broadcast-period>100</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<group-address>${udp-address:231.7.7.9}</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>netty-connector</connector-ref>
<check-period>5000</check-period>
<connection-ttl>10000</connection-ttl>
<retry-interval>1000</retry-interval>
<reconnect-attempts>-1</reconnect-attempts>
<use-duplicate-detection>true</use-duplicate-detection>
<forward-when-no-consumers>true</forward-when-no-consumers>
<max-hops>1</max-hops>
<discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
</cluster-connections>
</configuration>
hornetq-jms.xml
<configuration xmlns="urn:hornetq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">
<!--the connection factory used by the example-->
<connection-factory name="ConnectionFactory">
<connectors>
<connector-ref connector-name="netty-connector"/>
</connectors>
<entries>
<entry name="ConnectionFactory"/>
</entries>
<min-large-message-size>10240</min-large-message-size>
<connection-ttl>5000</connection-ttl>
<client-failure-check-period>5000</client-failure-check-period>
<retry-interval>1000</retry-interval>
<retry-interval-multiplier>1.5</retry-interval-multiplier>
<max-retry-interval>60000</max-retry-interval>
<reconnect-attempts>1000</reconnect-attempts>
<consumer-window-size>0</consumer-window-size>
</connection-factory>
<queue name="DLQ">
<entry name="/queue/DLQ"/>
</queue>
</configuration>

Related

Error occurs when call CuratorCache.start: Unable to read additional data from server sessionid

As the title says, something goes wrong when calling CuratorCache.start. The problem occurred in my project, so I created a small test project to reproduce it.
Env
jdk17(or jdk11)
spring-cloud-starter-zookeeper-all 3.1.0 (with curator-recipes 5.1.0) or curator-recipes 5.2.0
zookeeper 3.6.X or 3.7.0
MacOS Monterey or CentOS7.4
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.example</groupId>
<artifactId>curator-test</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-zookeeper-all</artifactId>
<version>3.1.0</version>
<exclusions>
<exclusion>
<artifactId>curator-recipes</artifactId>
<groupId>org.apache.curator</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-recipes</artifactId>
<version>5.2.0</version>
</dependency>
</dependencies>
</project>
Preparing data
Add a string value at the ZooKeeper path /test/1.
Test code
var curator = CuratorFrameworkFactory.builder()
.connectString("localhost:2181")
.retryPolicy(new ExponentialBackoffRetry(1000, 3))
.build();
curator.start();
var bytes = curator.getData().forPath("/test/1");
System.out.println("value for path /test/1 is " + new String(bytes));
var curatorCache = CuratorCache.builder(curator, "/test").build();
curatorCache.listenable().addListener(
CuratorCacheListener.builder()
.forCreates(node -> System.out.println(String.format("Node created: [%s]", node)))
.forChanges((oldNode, node) -> System.out.println(String.format("Node changed. Old: [%s] New: [%s]", oldNode, node)))
.forDeletes(oldNode -> System.out.println(String.format("Node deleted. Old value: [%s]", oldNode)))
.forInitialized(() -> System.out.println("Cache initialized"))
.build()
);
curatorCache.start();
The test code mostly comes from the official CuratorCache example. As it shows, I can read data from the ZooKeeper path, but when I call CuratorCache.start the following exception is thrown:
2022-01-23 16:07:53.578 INFO 55099 --- [ main] org.apache.zookeeper.ClientCnxnSocket : jute.maxbuffer value is 1048575 Bytes
2022-01-23 16:07:53.579 INFO 55099 --- [ main] org.apache.zookeeper.ClientCnxn : zookeeper.request.timeout value is 0. feature enabled=false
2022-01-23 16:07:53.580 INFO 55099 --- [ main] o.a.c.f.imps.CuratorFrameworkImpl : Default schema
2022-01-23 16:07:53.586 INFO 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server 172.16.153.68/172.16.153.68:2181.
2022-01-23 16:07:53.586 INFO 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : SASL config status: Will not attempt to authenticate using SASL (unknown error)
2022-01-23 16:07:53.588 INFO 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established, initiating session, client: /192.168.195.34:49599, server: 172.16.153.68/172.16.153.68:2181
2022-01-23 16:07:53.639 INFO 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server 172.16.153.68/172.16.153.68:2181, session id = 0x1007127ce8c071d, negotiated timeout = 40000
2022-01-23 16:07:53.640 INFO 55099 --- [ain-EventThread] o.a.c.f.state.ConnectionStateManager : State change: CONNECTED
2022-01-23 16:07:53.644 INFO 55099 --- [ain-EventThread] o.a.c.framework.imps.EnsembleTracker : New config event received: {}
2022-01-23 16:07:53.644 INFO 55099 --- [ain-EventThread] o.a.c.framework.imps.EnsembleTracker : New config event received: {}
2022-01-23 16:07:53.829 WARN 55099 --- [ main] iguration$LoadBalancerCaffeineWarnLogger : Spring Cloud LoadBalancer is currently working with the default cache. You can switch to using Caffeine cache, by adding it and org.springframework.cache.caffeine.CaffeineCacheManager to the classpath.
2022-01-23 16:07:53.909 INFO 55099 --- [ main] org.test.Application : Started Application in 1.966 seconds (JVM running for 3.09)
2022-01-23 16:07:53.917 INFO 55099 --- [tor-Framework-0] o.a.c.f.imps.CuratorFrameworkImpl : backgroundOperationsLoop exiting
2022-01-23 16:07:53.929 WARN 55099 --- [16.153.68:2181)] org.apache.zookeeper.ClientCnxn : An exception was thrown while closing send thread for session 0x1007127ce8c071d.
org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x1007127ce8c071d, likely server has closed socket
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) ~[zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) ~[zookeeper-3.6.3.jar:3.6.3]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290) ~[zookeeper-3.6.3.jar:3.6.3]
2022-01-23 16:07:54.036 INFO 55099 --- [ain-EventThread] org.apache.zookeeper.ClientCnxn : EventThread shut down for session: 0x1007127ce8c071d
2022-01-23 16:07:54.036 INFO 55099 --- [ionShutdownHook] org.apache.zookeeper.ZooKeeper : Session: 0x1007127ce8c071d closed
So does anybody have an idea? Thanks for your comments.
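One detail about the API worth noting: CuratorCache.start() returns immediately and the cache populates asynchronously, so a standalone main that falls through right after start() lets the JVM begin shutdown. A minimal plain-Java sketch of parking the main thread until an "initialized" callback fires (the callback wiring is simulated with an executor here, not real Curator API; the class name is made up):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WaitForInit {

    // Bounded wait so a dead connection cannot hang the JVM forever.
    static boolean parkUntilInitialized(CountDownLatch initialized) throws InterruptedException {
        return initialized.await(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // Latch released by the "initialized" callback, mirroring
        // CuratorCacheListener.builder().forInitialized(latch::countDown).
        CountDownLatch initialized = new CountDownLatch(1);

        // Simulated async cache fill; in the real code Curator invokes
        // the forInitialized callback from its own event thread.
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(initialized::countDown);

        System.out.println(parkUntilInitialized(initialized)
                ? "Cache initialized" : "Timed out waiting for cache");
        pool.shutdown();
    }
}
```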

Spring Cloud Config Client: Fetching config from wrong server

When I run my Spring Cloud Config Client project config-client, I found this error:
2018-02-09 10:31:09.885 INFO 13933 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at: http://localhost:8888
2018-02-09 10:31:10.022 WARN 13933 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Could not locate PropertySource: I/O error on GET request for "http://localhost:8888/config-client/dev/master": Connection refused; nested exception is java.net.ConnectException: Connection refused
2018-02-09 10:31:10.026 INFO 13933 --- [ main] c.y.c.ConfigClientApplication : No active profile set, falling back to default profiles: default
2018-02-09 10:31:10.040 INFO 13933 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext#33b1c5c5: startup date [Fri Feb 09 10:31:10 CST 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext#1ffe63b9
2018-02-09 10:31:10.419 INFO 13933 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=65226c2b-524f-3b14-8e17-9fdbc9f72d85
2018-02-09 10:31:10.471 INFO 13933 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$25380e89] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-02-09 10:31:10.688 INFO 13933 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 10001 (http)
2018-02-09 10:31:10.697 INFO 13933 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2018-02-09 10:31:10.698 INFO 13933 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.27
2018-02-09 10:31:10.767 INFO 13933 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2018-02-09 10:31:10.768 INFO 13933 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 727 ms
2018-02-09 10:31:10.861 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.ServletRegistrationBean : Mapping servlet: 'dispatcherServlet' to [/]
2018-02-09 10:31:10.864 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2018-02-09 10:31:10.864 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2018-02-09 10:31:10.864 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'httpPutFormContentFilter' to: [/*]
2018-02-09 10:31:10.865 INFO 13933 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'requestContextFilter' to: [/*]
2018-02-09 10:31:10.895 WARN 13933 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'configClientApplication': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'content' in value "${content}"
2018-02-09 10:31:10.896 INFO 13933 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2018-02-09 10:31:10.914 INFO 13933 --- [ main] utoConfigurationReportLoggingInitializer :
Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
2018-02-09 10:31:10.923 ERROR 13933 --- [ main] o.s.boot.SpringApplication : Application startup failed
Apparently, the config server address is wrong. However, the Spring Cloud Config Server is running at localhost:10000, and the application.yml of the project (config-client) is below. Why doesn't spring.cloud.config.uri work?
application.yml [config-client]
server:
port: 10001
spring:
application:
name: config-client
cloud:
config:
label: master
profile: dev
uri: http://localhost:10000
For future readers: as answered here, when using Spring Cloud Config Server we should specify basic bootstrap settings such as spring.application.name and spring.cloud.config.uri inside bootstrap.yml (or "bootstrap.properties").
Upon startup, Spring Cloud makes an HTTP call to the config server with the name of the application and retrieves back that application's configuration.
That said, since we're externalizing our settings using Spring Cloud Config Server, any default configuration defined in application.yml (or "application.properties") will be overridden during the bootstrap process at startup.
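The override behavior can be pictured as a chain of property sources where config-server-provided values are consulted before local defaults; a toy plain-Java sketch of that lookup order (not Spring's actual PropertySource classes, names are made up):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PropertyChain {

    // Earlier sources win, mirroring how config-server-provided
    // properties shadow defaults from the local application.yml.
    static String resolve(List<Map<String, String>> sources, String key) {
        for (Map<String, String> source : sources) {
            if (source.containsKey(key)) {
                return source.get(key);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> remote = new LinkedHashMap<>(); // from the config server
        remote.put("content", "hello from config server");

        Map<String, String> local = new LinkedHashMap<>();  // application.yml defaults
        local.put("content", "local default");
        local.put("server.port", "10001");

        // The remote source is first in the chain, so it overrides the
        // local default; keys it lacks fall through to application.yml.
        System.out.println(resolve(List.of(remote, local), "content"));     // hello from config server
        System.out.println(resolve(List.of(remote, local), "server.port")); // 10001
    }
}
```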
IntelliJ Users: add the following override parameter in the run/Debug Configuration:
Name: spring.cloud.config.uri
Value: http://your-server-here/config-server
You can point the client at the configuration server before the application starts by using bootstrap.yml;
just add the configuration server URI and the application name:
spring:
application:
name: clientTest
cloud:
config:
uri: http://localhost:8889
enabled: true
fail-fast: true
If we are using bootstrap.properties, we have to include this dependency in the pom for Spring Boot 2.4.0+
(added to avoid an error when using Spring Boot newer than 2.4.0):
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
In my case I was testing Spring Consul, which usually runs on 8500, but I saw a different port in the log. I found that the different port was due to the following Spring Cloud dependency, so I just had to remove it:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-config</artifactId>
</dependency>

Failed to wait for initial partition map exchange

After changing Apache Ignite 2.0 to 2.1, I got the warning below:
2017-08-17 10:44:21.699 WARN 10884 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for initial partition map exchange. Possible reasons are:
I use a third-party persistence cache store.
When I remove the cacheStore configuration, I don't get the warning and everything works fine.
Using the cacheStore but downgrading from 2.1 to 2.0, I also don't get the warning and everything works fine.
Is there a significant change in 2.1?
Here is my full framework stack:
- spring boot 1.5.6
- spring data jpa
- apache ignite 2.1.0
Here is my full configuration in Java code (I use embedded Ignite in Spring).
I use a partitioned cache with write-behind to RDBMS storage via Spring Data JPA.
IgniteConfiguration igniteConfig = new IgniteConfiguration();
CacheConfiguration<Long, Object> cacheConfig = new CacheConfiguration<>();
cacheConfig.setCopyOnRead(false); //for better performance
cacheConfig
.setWriteThrough(true)
.setWriteBehindEnabled(true)
.setWriteBehindBatchSize(1024)
.setWriteBehindFlushFrequency(10000)
.setWriteBehindCoalescing(true)
.setCacheStoreFactory(new CacheStoreImpl()); //CacheStoreImpl use spring data jpa internally
cacheConfig.setName("myService");
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setBackups(2);
cacheConfig.setWriteSynchronizationMode(FULL_ASYNC);
cacheConfig.setNearConfiguration(new NearCacheConfiguration<>());//use default configuration
igniteConfig.setCacheConfiguration(cacheConfig);
igniteConfig.setMemoryConfiguration(new MemoryConfiguration()
.setPageSize(8 * 1024)
.setMemoryPolicies(new MemoryPolicyConfiguration()
.setInitialSize(256L * 1024L * 1024L)
.setMaxSize(1024L * 1024L * 1024L)));
Ignite ignite = IgniteSpring.start(igniteConfig, springApplicationCtx);
ignite.active(true);
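The write-behind settings above buffer cache updates and push them to the store either when the batch fills or when the flush interval elapses; a conceptual plain-Java sketch of that batching and coalescing behavior (this is an illustration of the semantics, not Ignite internals; the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class WriteBehindBuffer {
    private final int batchSize; // cf. setWriteBehindBatchSize(1024)
    // Coalescing: only the last write per key is kept, cf. setWriteBehindCoalescing(true).
    private final Map<Long, Object> pending = new LinkedHashMap<>();
    private final List<Map<Long, Object>> flushedBatches = new ArrayList<>();

    WriteBehindBuffer(int batchSize) {
        this.batchSize = batchSize;
    }

    // Writes land in the in-memory buffer first; the store sees them later.
    void put(long key, Object value) {
        pending.put(key, value);
        if (pending.size() >= batchSize) {
            flush(); // size-triggered flush
        }
    }

    // In Ignite a background thread also flushes every
    // writeBehindFlushFrequency milliseconds (10000 ms above).
    void flush() {
        if (!pending.isEmpty()) {
            flushedBatches.add(new LinkedHashMap<>(pending)); // stands in for the RDBMS write
            pending.clear();
        }
    }

    List<Map<Long, Object>> flushedBatches() {
        return flushedBatches;
    }

    public static void main(String[] args) {
        WriteBehindBuffer buf = new WriteBehindBuffer(3);
        buf.put(1L, "a");
        buf.put(1L, "b"); // coalesced with the previous write of key 1
        buf.put(2L, "c");
        buf.put(3L, "d"); // third distinct key fills the batch and triggers a flush
        System.out.println(buf.flushedBatches()); // [{1=b, 2=c, 3=d}]
    }
}
```

With FULL_ASYNC write synchronization on top of this, the caller never waits for either the backup copies or the store write, which is why a slow or blocked cache store can stall background operations without surfacing an error to the application.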
Here is my full log with -DIGNITE_QUIET=false:
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Config URL: n/a
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Daemon mode: off
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : OS: Windows 10 10.0 amd64
2017-08-18 11:54:52.587 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : OS user: user
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : PID: 684
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Language runtime: Java Platform API Specification ver. 1.8
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM information: Java(TM) SE Runtime Environment 1.8.0_131-b11 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.131-b11
2017-08-18 11:54:52.588 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM total memory: 1.9GB
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Remote Management [restart: off, REST: on, JMX (remote: on, port: 58771, auth: off, ssl: off)]
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : IGNITE_HOME=null
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : VM arguments: [-Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.port=58771, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Djava.rmi.server.hostname=localhost, -Dspring.liveBeansView.mbeanDomain, -Dspring.application.admin.enabled=true, -Dspring.profiles.active=rdbms,multicastIp, -Dapi.port=10010, -Xmx2g, -Xms2g, -DIGNITE_QUIET=false, -Dfile.encoding=UTF-8, -Xbootclasspath:C:\Program Files\Java\jre1.8.0_131\lib\resources.jar;C:\Program Files\Java\jre1.8.0_131\lib\rt.jar;C:\Program Files\Java\jre1.8.0_131\lib\jsse.jar;C:\Program Files\Java\jre1.8.0_131\lib\jce.jar;C:\Program Files\Java\jre1.8.0_131\lib\charsets.jar;C:\Program Files\Java\jre1.8.0_131\lib\jfr.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\cldrdata.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\dnsns.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\jaccess.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\jfxrt.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\localedata.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\nashorn.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunec.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunmscapi.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jre1.8.0_131\lib\ext\zipfs.jar]
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : System cache's MemoryPolicy size is configured to 40 MB. Use MemoryConfiguration.systemCacheMemorySize property to change the setting.
2017-08-18 11:54:52.589 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Configured caches [in 'sysMemPlc' memoryPolicy: ['ignite-sys-cache'], in 'default' memoryPolicy: ['myCache']]
2017-08-18 11:54:52.592 WARN 684 --- [ pub-#11%null%] o.apache.ignite.internal.GridDiagnostic : This operating system has been tested less rigorously: Windows 10 10.0 amd64. Our team will appreciate the feedback if you experience any problems running ignite in this environment.
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor : Configured plugins:
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor : ^-- None
2017-08-18 11:54:52.657 INFO 684 --- [ main] o.a.i.i.p.plugin.IgnitePluginProcessor :
2017-08-18 11:54:52.724 INFO 684 --- [ main] o.a.i.s.c.tcp.TcpCommunicationSpi : Successfully bound communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0, selectorsCnt=4, selectorSpins=0, pairedConn=false]
2017-08-18 11:54:52.772 WARN 684 --- [ main] o.a.i.s.c.tcp.TcpCommunicationSpi : Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
2017-08-18 11:54:52.787 WARN 684 --- [ main] o.a.i.s.c.noop.NoopCheckpointSpi : Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
2017-08-18 11:54:52.811 WARN 684 --- [ main] o.a.i.i.m.c.GridCollisionManager : Collision resolution is disabled (all jobs will be activated upon arrival).
2017-08-18 11:54:52.812 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Security status [authentication=off, tls/ssl=off]
2017-08-18 11:54:53.087 INFO 684 --- [ main] o.a.i.i.p.odbc.SqlListenerProcessor : SQL connector processor has started on TCP port 10800
2017-08-18 11:54:53.157 INFO 684 --- [ main] o.a.i.i.p.r.p.tcp.GridTcpRestProtocol : Command protocol successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
2017-08-18 11:54:53.373 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Non-loopback local IPs: 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831, fe80:0:0:0:159d:5c82:b4ca:7630%eth2, fe80:0:0:0:30a3:1c57:3f57:4831%net0, fe80:0:0:0:3857:b492:48ad:1dc%eth4
2017-08-18 11:54:53.373 INFO 684 --- [ main] org.apache.ignite.internal.IgniteKernal : Enabled local MACs: 00000000000000E0, 0A0027000004, BCEE7B8B7C00
2017-08-18 11:54:53.404 INFO 684 --- [ main] o.a.i.spi.discovery.tcp.TcpDiscoverySpi : Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=7d90a0ac-b620-436f-b31c-b538a04b0919]
2017-08-18 11:54:53.409 WARN 684 --- [ main] .s.d.t.i.m.TcpDiscoveryMulticastIpFinder : TcpDiscoveryMulticastIpFinder has no pre-configured addresses (it is recommended in production to specify at least one address in TcpDiscoveryMulticastIpFinder.getAddresses() configuration property)
2017-08-18 11:54:55.068 INFO 684 --- [orker-#34%null%] o.apache.ignite.internal.exchange.time : Started exchange init [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], crd=true, evt=10, node=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], evtNode=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], customEvt=null]
2017-08-18 11:54:55.302 INFO 684 --- [orker-#34%null%] o.a.i.i.p.cache.GridCacheProcessor : Started cache [name=ignite-sys-cache, memoryPolicyName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL]
2017-08-18 11:55:15.066 WARN 684 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for initial partition map exchange. Possible reasons are:
^-- Transactions in deadlock.
^-- Long running transactions (ignore if this is the case).
^-- Unreleased explicit locks.
2017-08-18 11:55:35.070 WARN 684 --- [ main] .i.p.c.GridCachePartitionExchangeManager : Still waiting for initial partition map exchange [fut=GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false, reassign=false, discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], topVer=1, nodeId8=7d90a0ac, msg=null, type=NODE_JOINED, tstamp=1503024895045], crd=TcpDiscoveryNode [id=7d90a0ac-b620-436f-b31c-b538a04b0919, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.183.206, 192.168.56.1, 2001:0:9d38:6abd:30a3:1c57:3f57:4831], sockAddrs=[/192.168.183.206:47500, DESKTOP-MDB6VIL/192.168.56.1:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500, /2001:0:9d38:6abd:30a3:1c57:3f57:4831:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1503024893396, loc=true, ver=2.1.0#20170721-sha1:a6ca5c8a, isClient=false], exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], nodeId=7d90a0ac, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=1821989981], init=false, lastVer=null, partReleaseFut=null, exchActions=null, affChangeMsg=null, skipPreload=false, clientOnlyExchange=false, initTs=1503024895057, centralizedAff=false, changeGlobalStateE=null, forcedRebFut=null, done=false, evtLatch=0, remaining=[], super=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=733156437]]]
I debugged my code, and my guess is that IgniteSpring cannot inject the SpringResource:
@SpringResource(resourceClass = RdbmsCachePersistenceRepository.class)
private RdbmsCachePersistenceRepository repository;
@SpringResource(resourceClass = RdbmsCachePersistenceRepository.class)
private CacheObjectFactory cacheObjectFactory;
repository and cacheObjectFactory are the same instance, because of the code below:
public interface RdbmsCachePersistenceRepository extends
        JpaRepository<RdbmsCachePersistence, Long>,
        CachePersistenceRepository<RdbmsCachePersistence>,
        CacheObjectFactory {

    @Override
    default CachePersistence createCacheObject(long key, Object value, int partition) {
        return new RdbmsCachePersistence(key, value, partition);
    }
}
RdbmsCachePersistenceRepository is implemented by Spring Data JPA.
When I debug the code line by line, the Ignite context cannot obtain the RdbmsCachePersistenceRepository bean, and I don't know why.
I found a workaround for this problem, but I don't understand why it works. I added this dummy line before IgniteSpring.start:
springApplicationCtx.getBean(RdbmsCachePersistenceRepository.class);
I think the Spring resource bean has not been initialized yet at the moment the Ignite context tries to get it.
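The suspected ordering problem can be modeled in plain Java (this is a self-contained sketch, not the real Spring or Ignite API): if a registry creates singletons lazily on first lookup, a framework that only inspects already-created singletons sees nothing until someone calls getBean first — which is exactly what the dummy line forces.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal stand-in for a lazily-initializing bean registry.
class LazyRegistry {
    private final Map<Class<?>, Supplier<?>> definitions = new HashMap<>();
    private final Map<Class<?>, Object> singletons = new HashMap<>();

    <T> void define(Class<T> type, Supplier<T> factory) {
        definitions.put(type, factory);
    }

    // getBean() creates the singleton on first access.
    @SuppressWarnings("unchecked")
    <T> T getBean(Class<T> type) {
        return (T) singletons.computeIfAbsent(type, t -> definitions.get(t).get());
    }

    // Models a consumer that only sees beans that already exist
    // (the suspected behaviour when the Ignite context starts).
    boolean isInitialized(Class<?> type) {
        return singletons.containsKey(type);
    }
}

public class BeanOrderingDemo {
    interface Repository {}

    public static void main(String[] args) {
        LazyRegistry ctx = new LazyRegistry();
        ctx.define(Repository.class, () -> new Repository() {});

        // Without the dummy lookup, the singleton does not exist yet:
        System.out.println("before getBean: " + ctx.isInitialized(Repository.class));

        // The workaround from the question: force initialization first.
        ctx.getBean(Repository.class);
        System.out.println("after getBean:  " + ctx.isInitialized(Repository.class));
    }
}
```

This prints `false` then `true`, matching the observed behaviour: the `springApplicationCtx.getBean(...)` call before `IgniteSpring.start` plays the role of the forced lookup.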

JBoss 7: what is the subsystem xmlns for a jar in jboss-as-7.1.0.Final\modules\com?

I am following the example at http://www.mastertheboss.com/jboss-web/jbosswebserver/using-web-valves-with-jboss-7 to work with valves. There, the jar file is placed in the jboss-as-7.1.0.Final\modules\org folder, and standalone.xml declares 'subsystem xmlns="urn:jboss:domain:web:1.4" ...'.
For the valve I am working on, I need to put the jar files in jboss-as-7.1.0.Final\modules\com, but when I declare 'subsystem xmlns="urn:jboss:domain:web:1.4" ...' in standalone.xml, JBoss does not even start and gives the following error.
Listening for transport dt_socket at address: 1044
10:32:51,209 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA
10:32:51,506 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA
10:32:51,553 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.0.Final "Thunder" starting
10:32:52,578 ERROR [org.jboss.as.controller] JBAS014601: Error booting the container: java.lang.RuntimeException: org.jboss.as.controller.persistence.ConfigurationPersistenceException: JBAS014676: Failed to parse configuration
at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:161) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_45]
Caused by: org.jboss.as.controller.persistence.ConfigurationPersistenceException: JBAS014676: Failed to parse configuration
at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:125) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:187) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.server.ServerService.boot(ServerService.java:261) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:155) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
... 1 more
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[118,8]
Message: Unexpected element '{urn:jboss:domain:web:1.4}subsystem'
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:108) [staxmapper-1.1.0.Final.jar:1.1.0.Final]
at org.jboss.staxmapper.XMLExtendedStreamReaderImpl.handleAny(XMLExtendedStreamReaderImpl.java:69) [staxmapper-1.1.0.Final.jar:1.1.0.Final]
at org.jboss.as.server.parsing.StandaloneXml.parseServerProfile(StandaloneXml.java:893) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.server.parsing.StandaloneXml.readServerElement_1_1(StandaloneXml.java:329) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.server.parsing.StandaloneXml.readElement(StandaloneXml.java:126) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.server.parsing.StandaloneXml.readElement(StandaloneXml.java:100) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:110) [staxmapper-1.1.0.Final.jar:1.1.0.Final]
at org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69) [staxmapper-1.1.0.Final.jar:1.1.0.Final]
at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:117) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
... 4 more
What is the 'subsystem xmlns' I need to declare for the jars we put in the jboss-as-7.1.0.Final\modules\com directory?
If I change it to urn:jboss:domain:web:1.1, I get the following error:
Listening for transport dt_socket at address: 1044
13:02:02,752 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA
13:02:03,049 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA
13:02:03,111 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.0.Final "Thunder" starting
13:02:04,232 ERROR [org.jboss.as.controller] JBAS014601: Error booting the container: java.lang.RuntimeException: org.jboss.as.controller.persistence.ConfigurationPersistenceException: JBAS014676: Failed to parse configuration
at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:161) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_45]
Caused by: org.jboss.as.controller.persistence.ConfigurationPersistenceException: JBAS014676: Failed to parse configuration
at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:125) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:187) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.server.ServerService.boot(ServerService.java:261) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:155) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
... 1 more
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[240,4]
Message: JBAS014789: Unexpected element '{urn:jboss:domain:web:1.1}valve' encountered
at org.jboss.as.controller.parsing.ParseUtils.unexpectedElement(ParseUtils.java:85) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.web.WebSubsystemParser.readElement(WebSubsystemParser.java:396)
at org.jboss.as.web.WebSubsystemParser.readElement(WebSubsystemParser.java:60)
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:110) [staxmapper-1.1.0.Final.jar:1.1.0.Final]
at org.jboss.staxmapper.XMLExtendedStreamReaderImpl.handleAny(XMLExtendedStreamReaderImpl.java:69) [staxmapper-1.1.0.Final.jar:1.1.0.Final]
at org.jboss.as.server.parsing.StandaloneXml.parseServerProfile(StandaloneXml.java:893) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.server.parsing.StandaloneXml.readServerElement_1_1(StandaloneXml.java:329) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.server.parsing.StandaloneXml.readElement(StandaloneXml.java:126) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.as.server.parsing.StandaloneXml.readElement(StandaloneXml.java:100) [jboss-as-server-7.1.0.Final.jar:7.1.0.Final]
at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:110) [staxmapper-1.1.0.Final.jar:1.1.0.Final]
at org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69) [staxmapper-1.1.0.Final.jar:1.1.0.Final]
at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:117) [jboss-as-controller-7.1.0.Final.jar:7.1.0.Final]
... 4 more
The following are my standalone.xml subsystems for this:
<subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1">
    <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000"/>
    <deployment-scanner name="myShipINFO" path="D:\msi_git_workspace\MSI\msi" scan-interval="5000"/>
    <deployment-scanner name="jamon" path="D:\jamonAPI" scan-interval="5000"/>
</subsystem>
<subsystem xmlns="urn:jboss:domain:web:1.1" native="false" default-virtual-server="default-host">
    <configuration>
        <jsp-configuration development="true"/>
    </configuration>
    <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/>
    <valve class-name="com.jamonapi.http.JAMonTomcatValve"/>
    <virtual-server name="default-host" enable-welcome-root="false">
        <alias name="localhost"/>
        <alias name="example.com"/>
    </virtual-server>
</subsystem>
The following is my module.xml
<module xmlns="urn:jboss:module:1.1" name="com.jamonapi.http">
    <properties>
        <property name="jboss.api" value="private"/>
    </properties>
    <resources>
        <resource-root path="jamon-2.79.jar"/>
    </resources>
</module>
And the directory structure for my module.xml location is :
D:\jboss-as-7.1.0.Final\modules\com\jamonapi\http\main
In the example, JBoss 7.2 is used, and in that case urn:jboss:domain:web:1.4 is fine. But with JBoss 7.1.0 you need to use urn:jboss:domain:web:1.0.
Change urn:jboss:domain:web:1.4 to urn:jboss:domain:web:1.0. It should work then.
Updated:
You can check the schema definitions in {JBOSS_HOME}\docs\schema.
The valve element is not valid until urn:jboss:domain:web:1.2.
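Since the valve element only appears in the web schema from version 1.2 onward, a JBoss release whose web subsystem ships urn:jboss:domain:web:1.2 would need a declaration along these lines. This is a sketch only: the name and module attributes follow the later schema and should be verified against the XSDs in {JBOSS_HOME}\docs\schema, since JBoss AS 7.1.0 itself does not ship this schema version.

```xml
<subsystem xmlns="urn:jboss:domain:web:1.2" native="false" default-virtual-server="default-host">
    <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/>
    <!-- valve is only defined in web:1.2 and later schemas;
         name/module attributes are assumptions to verify -->
    <valve name="jamonValve" class-name="com.jamonapi.http.JAMonTomcatValve" module="com.jamonapi.http"/>
    <virtual-server name="default-host" enable-welcome-root="false">
        <alias name="localhost"/>
    </virtual-server>
</subsystem>
```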

Mule ESB 3.3 - Receiving IMAPS mail (Gmail)

Does anyone have a working example of reading mail from IMAP over SSL (IMAPS) from Gmail?
Some info I have gathered, but without any success:
Mule ESB: Retrieving email messages from Gmail using IMAP connector
IMAP Questions
Mule ESB IMAP questions
and of course the infamous documentation
The thing just sits there doing nothing.
Here is my flow:
<mule>
    <imaps:connector
        name="imapsConnector"
        checkFrequency="5000"
        backupEnabled="true"
        mailboxFolder="INBOX"
        deleteReadMessages="false"
        doc:name="IMAP">
        <imaps:tls-client />
        <imaps:tls-trust-store />
    </imaps:connector>
    <expression-transformer
        name="returnAttachments"
        doc:name="Expression">
        <return-argument
            evaluator="attachments-list"
            expression="*.csv" />
    </expression-transformer>
    <flow
        name="GmailImapsFetch"
        doc:name="Flow1_IMAP_fetch">
        <imaps:inbound-endpoint
            user="your_username%40gmail.com"
            password="your_password"
            host="imap.googlemail.com"
            port="993"
            transformer-refs="returnAttachments"
            disableTransportTransformer="true"
            doc:name="IMAP"
            connector-ref="imapsConnector"
            responseTimeout="10000" />
        <!-- <collection-splitter doc:name="Collection Splitter" /> -->
        <logger message="#[payload]" />
        <file:outbound-endpoint
            path="/tmp/gmail-#[function:datestamp].dat"
            doc:name="File">
            <expression-transformer>
                <return-argument
                    expression="payload.inputStream"
                    evaluator="groovy" />
            </expression-transformer>
        </file:outbound-endpoint>
    </flow>
</mule>
Mule Studio (1.3.2) complains that the XML is malformed (it doesn't like the expression-transformer element), but nothing complains at runtime.
Does anyone have this running?
Thanks.
--
Log:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Starting app 'mulelab' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[23-01-13 11:07:01] [DEBUG] Applying lifecycle phase: org.mule.lifecycle.phases.MuleContextStartPhase#14b03ea for registry: DefaultRegistryBroker
[23-01-13 11:07:01] [DEBUG] lifecycle phase: start for object: org.mule.util.queue.TransactionalQueueManager#63edf84f
[23-01-13 11:07:01] [ INFO] Starting ResourceManager
[23-01-13 11:07:01] [DEBUG] Restore retrieved 0 objects
[23-01-13 11:07:01] [DEBUG] Restore retrieved 0 objects
[23-01-13 11:07:01] [DEBUG] Restore retrieved 0 objects
[23-01-13 11:07:01] [ INFO] Started ResourceManager
[23-01-13 11:07:01] [DEBUG] lifecycle phase: start for object: FileConnector
. . .
[23-01-13 11:07:01] [DEBUG] lifecycle phase: start for object: ImapsConnector
{
name=imapsConnector
lifecycle=initialise
this=4bb4df9c
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=false
supportedProtocols=[imaps]
serviceOverrides=<none>
}
[23-01-13 11:07:01] [DEBUG] Connecting: ImapsConnector
{
name=imapsConnector
lifecycle=initialise
this=4bb4df9c
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=false
supportedProtocols=[imaps]
serviceOverrides=<none>
}
[23-01-13 11:07:01] [ INFO] Connected: ImapsConnector
{
name=imapsConnector
lifecycle=initialise
this=4bb4df9c
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[imaps]
serviceOverrides=<none>
}
[23-01-13 11:07:01] [ INFO] Starting: ImapsConnector
{
name=imapsConnector
lifecycle=initialise
this=4bb4df9c
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[imaps]
serviceOverrides=<none>
}
[23-01-13 11:07:01] [ INFO] Starting connector: imapsConnector
[23-01-13 11:07:01] [DEBUG] Successfully connected to ImapsConnector
{
name=imapsConnector
lifecycle=initialise
this=4bb4df9c
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=false
supportedProtocols=[imaps]
serviceOverrides=<none>
}
[23-01-13 11:07:01] [DEBUG] lifecycle phase: start for object: org.mule.transport.servlet.jetty.JettyWebappServerAgent#26945b95
[23-01-13 11:07:01] [DEBUG] lifecycle phase: start for object: org.mule.module.management.agent.JmxAgent#320f6398
[23-01-13 11:07:01] [DEBUG] lifecycle phase: start for object: SedaModel{_muleSystemModel}
[23-01-13 11:07:01] [ INFO] Starting model: _muleSystemModel
[23-01-13 11:07:01] [DEBUG] lifecycle phase: start for object: Flow{GmailImapsFetch}
[23-01-13 11:07:01] [ INFO] Starting flow: GmailImapsFetch
[23-01-13 11:07:01] [ INFO] Starting service: GmailImapsFetch.stage1
[23-01-13 11:07:01] [ INFO] Registering listener: GmailImapsFetch on endpointUri: imaps://your_username%40gmail.com:****#imap.googlemail.com:993
[23-01-13 11:07:01] [ INFO] Loading default inbound transformer: org.mule.transport.email.transformers.EmailMessageToString
[23-01-13 11:07:01] [DEBUG] Setting transformer name to: EmailMessageToString#1868577756
[23-01-13 11:07:01] [ INFO] Initialising: 'null'. Object is: RetrieveMessageReceiver
[23-01-13 11:07:01] [DEBUG] Connecting: RetrieveMessageReceiver{this=22fe135d, receiverKey=your_username#gmail.com, endpoint=imaps://your_username%40gmail.com:****#imap.googlemail.com:993}
[23-01-13 11:07:01] [ INFO] Connecting clusterizable message receiver
[23-01-13 11:07:01] [DEBUG] No Authenticator set on connector: imapsConnector; using default.
[23-01-13 11:07:01] [ INFO] Defaulting mule.email.imaps trust store to client Key Store
[23-01-13 11:07:01] [DEBUG] MuleSession local properties =============
[23-01-13 11:07:01] [DEBUG] mail.imaps.ssl: true
[23-01-13 11:07:01] [DEBUG] mail.debug: true
[23-01-13 11:07:01] [DEBUG] mail.imaps.socketFactory.class: org.mule.transport.email.ImapsSocketFactory
[23-01-13 11:07:01] [DEBUG] mail.imaps.socketFactory.fallback: false
[23-01-13 11:07:01] [DEBUG] mail.imap.host: imap.googlemail.com
[23-01-13 11:07:01] [DEBUG] mail.imap.auth: true
[23-01-13 11:07:01] [DEBUG] mail.imap.socketFactory.port: 993
[23-01-13 11:07:01] [DEBUG] mail.imap.rsetbeforequit: true
[23-01-13 11:07:01] [DEBUG] skipped 0
[23-01-13 11:07:01] [DEBUG] System global properties =============
[23-01-13 11:07:01] [DEBUG] mule.home: /home/pakmans/workspace/.mule
[23-01-13 11:07:01] [DEBUG] mule.encoding: UTF-8
[23-01-13 11:07:01] [DEBUG] skipped 57
[23-01-13 11:07:01] [DEBUG] Creating mail session: host = imap.googlemail.com, port = 993, user = your_username#gmail.com, pass = ********
[23-01-13 11:07:01] [DEBUG] creating: true; mule.email.imaps
[23-01-13 11:07:01] [DEBUG] creating factory
[23-01-13 11:07:01] [ INFO] Using org.mule.api.security.provider.SunSecurityProviderInfo
[23-01-13 11:07:01] [DEBUG] mule.email.imaps.ssl.trustStore -> null
[23-01-13 11:07:01] [DEBUG] mule.email.imaps.ssl.trustStoreType -> jks
[23-01-13 11:07:01] [DEBUG] mule.email.imaps.ssl.trustStorePassword -> null
[23-01-13 11:07:01] [DEBUG] mule.email.imaps.ssl.trustManagerAlgorithm -> SunX509
[23-01-13 11:07:01] [DEBUG] mule.email.imaps.ssl.keyStore -> .keystore
[23-01-13 11:07:01] [DEBUG] Unable to load resource from the file system: /home/pakmans/workspace/mulelab/.keystore
[23-01-13 11:07:01] [DEBUG] Unable to load resource .keystore from the classpath
[23-01-13 11:07:01] [DEBUG] Normalised keyStore path to: null
[23-01-13 11:07:01] [DEBUG] mule.email.imaps.ssl.keyStoreType -> jks
[23-01-13 11:07:01] [DEBUG] mule.email.imaps.ssl.keyStorePassword -> null
[23-01-13 11:07:01] [DEBUG] initialising: anon true
[23-01-13 11:07:01] [ INFO] Defaulting mule.email.imaps trust store to client Key Store
[23-01-13 11:07:03] [DEBUG] Connected: imaps://your_username%40gmail.com:****#imap.googlemail.com:993
[23-01-13 11:07:03] [ INFO] Starting: 'null'. Object is: RetrieveMessageReceiver
[23-01-13 11:07:03] [ INFO] Starting clusterizable message receiver
[23-01-13 11:07:03] [DEBUG] RetrieveMessageReceiver#22fe135d scheduled ScheduledThreadPoolExecutor$ScheduledFutureTask#6fa37fac with 5000 MILLISECONDS polling frequency
[23-01-13 11:07:03] [DEBUG] lifecycle phase: start for object: DefaultInboundEndpoint{endpointUri=imaps://your_username%40gmail.com:<password>#imap.googlemail.com, connector=ImapsConnector
{
name=imapsConnector
lifecycle=start
this=4bb4df9c
numberOfConcurrentTransactedReceivers=4
createMultipleTransactedReceivers=true
connected=true
supportedProtocols=[imaps]
serviceOverrides=<none>
}
, name='endpoint.imaps.your_username.gmail.com', mep=ONE_WAY, properties={}, transactionConfig=Transaction{factory=null, action=INDIFFERENT, timeout=0}, deleteUnacceptedMessages=false, initialState=started, responseTimeout=10000, endpointEncoding=UTF-8, disableTransportTransformer=true}
[23-01-13 11:07:03] [DEBUG] lifecycle phase: start for object: org.mule.DefaultMuleContext#6a9effe0
[23-01-13 11:07:03] [ INFO] Reload interval: 3000
[23-01-13 11:07:03] [DEBUG] org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'wrapper-manager' is defined
[23-01-13 11:07:03] [DEBUG] registering key/object wrapper-manager/org.mule.module.management.agent.WrapperManagerAgent#4c825cf3
[23-01-13 11:07:03] [DEBUG] applying processors
[23-01-13 11:07:03] [DEBUG] applying lifecycle to object: org.mule.module.management.agent.WrapperManagerAgent#4c825cf3
[23-01-13 11:07:03] [ INFO] This JVM hasn't been launched by the wrapper, the agent will not run.
[23-01-13 11:07:03] [DEBUG] Registering statistics with name: Mule.mulelab:type=Statistics,name=AllStatistics
[23-01-13 11:07:03] [DEBUG] Registering mule with name: Mule.mulelab:name=MuleContext
[23-01-13 11:07:03] [DEBUG] Registering configuration with name: Mule.mulelab:name=Configuration
[23-01-13 11:07:03] [DEBUG] Registering model with name: Mule.mulelab:type=Model,name="_muleSystemModel(seda)"
[23-01-13 11:07:03] [DEBUG] Registering service with name: Mule.mulelab:type=Flow,name="GmailImapsFetch"
[23-01-13 11:07:03] [DEBUG] org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'endpoint.imaps.your_username.gmail.com' is defined
[23-01-13 11:07:03] [ INFO] Attempting to register service with name: Mule.mulelab:type=Endpoint,service="GmailImapsFetch",connector=imapsConnector,name="endpoint.imaps.your_username.gmail.com"
[23-01-13 11:07:03] [ INFO] Registered Endpoint Service with name: Mule.mulelab:type=Endpoint,service="GmailImapsFetch",connector=imapsConnector,name="endpoint.imaps.your_username.gmail.com"
[23-01-13 11:07:03] [DEBUG] org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'connector.file.mule.default.1' is defined
[23-01-13 11:07:03] [DEBUG] Attempting to register service with name: Mule.mulelab:type=Connector,name="connector.file.mule.default.1"
[23-01-13 11:07:03] [ INFO] Registered Connector Service with name Mule.mulelab:type=Connector,name="connector.file.mule.default.1"
[23-01-13 11:07:03] [DEBUG] org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'imapsConnector.1' is defined
[23-01-13 11:07:03] [DEBUG] Attempting to register service with name: Mule.mulelab:type=Connector,name="imapsConnector.1"
[23-01-13 11:07:03] [ INFO] Registered Connector Service with name Mule.mulelab:type=Connector,name="imapsConnector.1"
[23-01-13 11:07:03] [DEBUG] Registering application statistics with name: Mule.mulelab:type=Application,name="application totals"
[23-01-13 11:07:03] [ INFO]
**********************************************************************
* Application: mulelab *
* OS encoding: UTF-8, Mule encoding: UTF-8 *
* *
* Agents Running: *
* JMX Agent *
**********************************************************************
[23-01-13 11:07:03] [ INFO]
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Started app 'mulelab' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I'm going to risk an answer :$
According to the IMAP connector's documentation, it seems your config is missing an attribute that is recommended with Gmail:
moveToFolder
The remote folder to move mail to once it has been read.
It is recommended that 'deleteReadMessages' is set to false when this
is used. This is very useful when working with public email services
such as GMail where marking messages for deletion doesn't work.
Instead set the #moveToFolder=[GMail]/Trash.
Can you give it a try?
(Edited: [Gmail] has to be surrounded by brackets)
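Applying the suggestion to the connector from the question, assuming moveToFolder is a connector-level attribute like deleteReadMessages (as the quoted connector documentation implies), the declaration would become:

```xml
<imaps:connector
    name="imapsConnector"
    checkFrequency="5000"
    backupEnabled="true"
    mailboxFolder="INBOX"
    deleteReadMessages="false"
    moveToFolder="[Gmail]/Trash"
    doc:name="IMAP">
    <imaps:tls-client />
    <imaps:tls-trust-store />
</imaps:connector>
```

The folder name [Gmail]/Trash is the one the documentation mentions for Gmail; localized Gmail accounts may use a different label for that folder, so check it in the Gmail IMAP settings.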