Apache Ignite Persistence and Services

Within my Ignite XML IgniteConfiguration, I have services defined that perform multiple functions, including reading/writing JMS messages and interacting with the corresponding caches. I now want to add native persistence for the cached data, but once I turn on persistence within any DataRegionConfiguration, my services no longer start up:
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
I'm sure there is a step I'm missing in the configuration. Any help, or better yet example configurations, would be greatly appreciated.
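A likely missing step: with native persistence enabled, an Ignite cluster starts in the inactive state, and caches and services are only deployed once the cluster is activated (e.g. with control.sh --activate). A minimal sketch of activating from code, assuming a node started from an XML file like the one above (the config path and class name are hypothetical):
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ServerStartup {
    public static void main(String[] args) {
        // Start a node from the Spring XML configuration (hypothetical path).
        Ignite ignite = Ignition.start("config/ignite-config.xml");

        // With persistenceEnabled=true the cluster starts INACTIVE; caches and
        // services become available only after the cluster is activated.
        ignite.cluster().active(true);
    }
}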

Related

Pivotal GemFire 9.1.1: Replicated Region not created in cluster

I have a GemFire cluster with 2 Locators and 2 Servers on two Unix machines. I am running a Spring Boot app which joins the GemFire cluster as a peer and tries to create Replicated Regions, loading the Regions using Spring Data GemFire. Once the Spring Boot app terminates, I am not seeing the Regions/data in the cluster.
Am I missing something here?
The GemFire cluster is not using cache.xml or Spring XML to bootstrap the Regions. My idea is to create the Regions through a standalone program and make them available in the cluster. SDGF version is 2.0.7.
gemfire-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:gfe="http://www.springframework.org/schema/gemfire"
xmlns:util="http://www.springframework.org/schema/util"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd">
<util:properties id="gemfireProperties">
<prop key="locators">unix_machine1[10334],unix_machine2[10334]</prop>
<prop key="mcast-port">0</prop>
</util:properties>
<bean id="autoSerializer" class="org.apache.geode.pdx.ReflectionBasedAutoSerializer">
<gfe:cache properties-ref="gemfireProperties" pdx-serializer-ref="autoSerializer" pdx-read-serialized="true"/>
<gfe:replicated-region id="Test" ignore-if-exists="true"/>
<gfe:replicated-region id="xyz" ignore-if-exists="true"/>
</beans>
The expectation is that the Regions should remain in the cluster even after the Spring Boot app terminates.
The simplest approach here would be to use the Cluster Configuration Service and create the Regions via Gfsh (e.g. create region --name=Test --type=REPLICATE). See the following link for more information, in particular the section "Using Cluster-based Configuration":
https://docs.spring.io/spring-gemfire/docs/current/reference/html/#bootstrap:cache:advanced
For more information on cluster configuration, see:
http://gemfire.docs.pivotal.io/97/geode/configuring/cluster_config/gfsh_persist.html
Your client code would then probably be a simple GemFire client connecting to the GemFire cluster.
Your expectations are not correct. This is not a limitation of Spring per se, but a side effect of how GemFire works.
If you were to configure a GemFire peer cache instance/member of the cluster using the GemFire API or pure cache.xml, then the cluster would not "remember" the configuration either.
When using the GemFire API, cache.xml or Spring config (either Spring XML or JavaConfig), the configuration is local to the member. Before GemFire's Cluster Configuration Service, administrators would need to distribute the configuration (e.g. cache.xml) across all the peer members that would form a cluster.
Then along came the Cluster Configuration Service, which enabled users to define their configuration using Gfsh. In Spring config, when configuring/bootstrapping a peer cache member of the cluster, you can enable the use of cluster configuration to configure the member, for example:
<gfe:cache use-cluster-configuration="true" ... />
As was pointed out here (bullet 4).
However, using Gfsh to configure each and every GemFire object (Regions, Indexes, DiskStores, etc.) can be quite tedious, especially if you have a lot of Regions. Plus, not all developers want to use a shell tool. Some development teams want to version the config along with the app, which makes good sense.
Given you are using Spring Boot, you should have a look at Spring Boot for Pivotal GemFire, here.
Another way to start your cluster is to configure and bootstrap the members using Spring, rather than Gfsh. I have an example of this here. You can, of course, run the Spring Boot app from the command line using a Spring Boot FAT JAR.
Of course, some administrators/operators prevent development teams from bootstrapping the GemFire cluster in this manner and instead expect the teams to use the tools (e.g. Gfsh) provided by the database.
If this is your case, then it might be better to develop Spring Boot, GemFire ClientCache applications connecting to a standalone cluster that was started using Gfsh.
You can still do very minimal config, such as:
start locator --name=LocatorOne
start server --name=ServerOne
start server --name=ServerTwo
...
And then let your Spring Boot client application drive the configuration (i.e. Regions, Indexes, etc.) of the cluster using SDG's cluster configuration push feature, as in the sketch below.
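A rough sketch of such a client, assuming the SDG 2.0+ annotation-based configuration model; the Region name comes from the question, while the class name is illustrative, and the client would still need to be pointed at your locators:
import org.apache.geode.cache.GemFireCache;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableClusterConfiguration;

@SpringBootApplication
@ClientCacheApplication
@EnableClusterConfiguration(useHttp = true) // push Region/Index definitions to the Gfsh-started cluster
public class ClientApp {

    public static void main(String[] args) {
        SpringApplication.run(ClientApp.class, args);
    }

    // Declaring a client PROXY Region prompts SDG to create the matching
    // server-side Region on the cluster via the cluster configuration push.
    @Bean("Test")
    public ClientRegionFactoryBean<Object, Object> testRegion(GemFireCache cache) {
        ClientRegionFactoryBean<Object, Object> region = new ClientRegionFactoryBean<>();
        region.setCache(cache);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}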
There are many different options, so the choice is yours. You need to decide which is best for your needs.
Hope this helps.

How to start the ActiveMQ WebConsole on the same server every time?

I have 3 virtual machines, each one of them running ZooKeeper and ActiveMQ.
Every time I start ActiveMQ, the ActiveMQ WebConsole starts on a different server. I want to start the ActiveMQ WebConsole on the same server every time, so I don't need to figure out from the logs which of them is running the web console.
This is how my jetty.xml is configured:
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
<!-- the default port number for the web console -->
<property name="host" value="0.0.0.0"/>
<property name="port" value="8161"/>
</bean>
This is not possible, as the embedded web server runs on whichever broker is currently the master.
You can look at alternative web consoles that allow remote management, such as hawtio, which can connect to remote servers. You can start hawtio on your local computer, have it run on some other host, or start it separately on one of those 3 nodes, etc.
http://hawt.io/
Running a local hawtio instance, as Claus advises, is a great option.
If you want to stick with the web console, you can actually have it connect to the current master broker.
You will need to start the console in non-embedded mode and set (at least) three system properties. That is, this typically involves deploying the web console .war inside a Tomcat or similar and passing the properties as -D JVM arguments:
webconsole.jms.url=failover:(tcp://serverA:61616,tcp://serverB:61616)
webconsole.jmx.url=service:jmx:rmi:///jndi/rmi://serverA:1099/jmxrmi,service:jmx:rmi:///jndi/rmi://serverB:1099/jmxrmi
webconsole.type=properties
An old article discusses using the embedded web consoles for failover as well; I don't know whether it applies in all details to current versions.

Using 'jms.useAsyncSend=true' for some messages and verifying it on the consumer side

We are using the embedded ActiveMQ broker within Fuse ESB (7.1.0), with consumers co-located.
The producer client is deployed on a remote GlassFish server. It uses the ActiveMQ resource adapter (5.6.0) along with Spring's JmsTemplate. The client publishes messages to different queues. I want some of the messages (going to a given queue) to use jms.useAsyncSend=true, whereas the other messages should use the default. I see the options below:
1) I can't append the option 'jms.useAsyncSend=true' to the resource adapter URL, because that would enforce it for all messages.
2) The JNDI look-up for the connection factory returns an instance of org.apache.activemq.ra.ActiveMQConnectionFactory. I was actually expecting an instance of org.apache.activemq.ActiveMQConnectionFactory, which would have allowed me to call setUseAsyncSend() for the corresponding JmsTemplate. So I can't use this option either.
3) I have multiple connection factories configured under the GlassFish connectors (one for each queue). I am trying to pass the property 'jms.useAsyncSend=true' as an additional property to a particular connection factory. I expect this to be used only for the connections created in that particular connection pool. Now, having done this, I want to verify that it really worked.
Question 1) Is there a way to check on the consumer side whether the property 'useAsyncSend' was set for an inbound message? This is to verify that what I have done on the producer side has actually worked. Note that I am using a camel-context to route messages to the end consumers. Is there a way to check this within the camel-context? Is there a header or any such thing corresponding to this?
Question 2) Is there a better way to set 'useAsyncSend' on the producer side, where one resource adapter is used for sending messages to different queues with different values for 'useAsyncSend'?
I understand that 'useAsyncSend' is an ActiveMQ-specific configuration, hence not available on the JmsTemplate interface.
Appreciate any help on this.
Thanks.
I don't know Fuse ESB, but I have created different ActiveMQ connections with different properties (in my case it was the RedeliveryPolicy). You then direct your producers and consumers to use the appropriate connection.
So, if using Spring XML to define your connections, it would look something like this:
<!-- Connection factory with useAsyncSend=true -->
<bean id="asyncCF" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${broker.url}" />
<property name="useAsyncSend" value="true" />
</bean>
<!-- Connection factory with useAsyncSend=false -->
<bean id="syncCF" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${broker.url}" />
<property name="useAsyncSend" value="false" />
</bean>
Or specify useAsyncSend as a parameter on the URL value, as in the sketch below.
This might help you with Question 2.
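A minimal sketch of the URL-parameter variant, assuming a broker at tcp://localhost:61616 and hypothetical queue names; each JmsTemplate inherits the send mode of the connection factory it wraps, so a producer picks sync or async per destination by choosing a template:
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class SendModes {

    public static void main(String[] args) {
        // useAsyncSend switched on via the broker URL instead of a bean property
        ActiveMQConnectionFactory asyncCF =
                new ActiveMQConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true");
        ActiveMQConnectionFactory syncCF =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        JmsTemplate asyncTemplate = new JmsTemplate(asyncCF);
        JmsTemplate syncTemplate = new JmsTemplate(syncCF);

        asyncTemplate.convertAndSend("fast.queue", "fire-and-forget payload");
        syncTemplate.convertAndSend("audited.queue", "payload acknowledged by the broker");
    }
}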

Does c3p0 connection pooling ensure max pool size?

I've gone through several questions; this one is somewhat related but doesn't answer my question.
Does the c3p0 connection pooling maxPoolSize ensure that the number of connections at any given time never exceeds this limit? What if maxPoolSize=5 and 10 users start using the app at exactly the same time?
My app configuration:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass"><value>${database.driverClassName}</value>/property>
<property name="jdbcUrl"><value>${database.url}</value></property>
<property name="user"><value>${database.username}</value></property>
<property name="password"><value>${database.password}</value></property>
<property name="initialPoolSize"><value>${database.initialPoolSize}</value>/property>
<property name="minPoolSize"><value>${database.minPoolSize}</value></property>
<property name="maxPoolSize"><value>${database.maxPoolSize}</value></property>
<property name="idleConnectionTestPeriod"><value>200</value></property>
<property name="acquireIncrement"><value>1</value></property>
<property name="maxStatements"><value>0</value></property>
<property name="numHelperThreads"><value>3</value></property>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
<property name="dataSource" ref="dataSource"/>
</bean>
It is important to distinguish between DataSources and Connection pools.
maxPoolSize is enforced by c3p0 on a per-pool basis, but a single DataSource may own multiple Connection pools, as there is (and must be) a distinct pool for each set of authentication credentials. If only the default dataSource.getConnection() method is ever called, then maxPoolSize will be the maximum number of Connections that the pool acquires and manages. However, if Connections are acquired using dataSource.getConnection( user, password ), then the DataSource may hold up to (maxPoolSize * num_distinct_users) Connections.
To answer your specific question: if maxPoolSize is 5 and 10 clients hit a c3p0 DataSource simultaneously, no more than 5 of them will get Connections at first. The remaining clients will wait() until Connections are returned (or c3p0.checkoutTimeout has expired).
Some caveats: c3p0 enforces maxPoolSize as described above, but there is no guarantee that, even if only a single per-auth pool is used, you won't occasionally see more than maxPoolSize Connections checked out. For example, c3p0 expires and destroys Connections asynchronously. As far as c3p0 is concerned, a Connection is gone once it has been made unavailable to clients and marked for destruction, not when it has actually been destroyed. So it is possible, if maxPoolSize is 5, that you'd occasionally observe 6 open Connections at the database: 5 Connections would be active in the pool, while the 6th is in the queue for destruction but not yet destroyed.
Another circumstance where you might see unexpectedly many Connections open is if you modify Connection pool properties at runtime. In actual fact, the configuration of interior Connection pools is immutable. When you "change" a pool parameter at runtime, what actually happens is that a new pool is started with the new configuration, and the old pool is put into a "wind-down" mode. Connections checked out of the old pool remain live and valid, but when they are checked in, they are destroyed. Only when all old-pool Connections have been checked back in is the pool truly dead.
So, if you have a pool with maxPoolSize Connections checked out and then alter a configuration parameter, you might transiently see a spike of up to (2 * maxPoolSize) Connections, if the new pool is hit with lots of traffic before the Connections checked out of the old pool have been returned. In practice this is very rarely an issue, as dynamic reconfiguration is not so common, and Connection checkouts ought to be (and usually are) very brief, so the old-pool Connections disappear rapidly. But it can happen!
I hope this helps.
P.S. acquireIncrement is best set to something larger than 1. An acquireIncrement of 1 means no Connections are prefetched ahead of demand, so whenever load increases, some Thread will directly experience the latency of Connection acquisition.
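To illustrate the blocking described above, here is a minimal sketch in which 10 threads contend for a pool capped at 5; it assumes an in-memory H2 database on the classpath, and the driver class, JDBC URL, and timings are illustrative:
import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.sql.Connection;

public class MaxPoolSizeDemo {

    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("org.h2.Driver");   // illustrative; any JDBC driver works
        ds.setJdbcUrl("jdbc:h2:mem:demo");
        ds.setMaxPoolSize(5);
        ds.setCheckoutTimeout(2000);          // waiters give up after 2s with an SQLException

        // 5 threads get Connections immediately; the other 5 wait (or time out),
        // since c3p0 never hands out more than maxPoolSize Connections per pool.
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try (Connection conn = ds.getConnection()) {
                    Thread.sleep(3000);       // hold the Connection past the checkout timeout
                } catch (Exception e) {
                    System.out.println(Thread.currentThread().getName() + ": " + e.getMessage());
                }
            }).start();
        }
    }
}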

Connecting to ActiveMQ broker using IBM client

In my test I run an in-memory ActiveMQ broker and then instantiate an ActiveMQConnectionFactory and do whatever I want in order to test it. I used this because it seemed to be the easiest way to create an integration test. I thought the switch from ActiveMQConnectionFactory to com.ibm.mq.jms.MQTopicConnectionFactory would be straightforward, but it apparently is not. What would be the mapping from this snippet
<bean id="activeMqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<constructor-arg value="vm://localhost:61616"/>
</bean>
to that one:
<bean id="ibmConnectionFactory" class="com.ibm.mq.jms.MQTopicConnectionFactory">
<property name="hostName" value="??"/>
<property name="port" value="??"/>
<property name="queueManager" value="??"/>
<property name="channel" value="??"/>
<property name="transportType" value="?"/>
</bean>
Would that even be possible without some kind of weird bridge, like the ones Camel has?
This is not possible. The JMS specification covers the API and behavior, but vendors are free to implement any wire format and communication protocols that they wish. WebSphere MQ uses its own formats and protocols, and ActiveMQ has its own formats and protocols.
Bridge applications function by reading messages into memory from one transport and then writing them to the other transport. Although this works at a basic level, the two transports have different destination namespaces and security realms, so these interfaces tend to be hard-coded point-to-point routes. This is usually the best you can expect when mixing JMS transport providers.
You cannot do that, either with Camel or the ActiveMQ JMS bridge, since you need a WebSphere MQ broker to connect to if you are using the IBM JMS classes (e.g. com.ibm.mq.jms.MQTopicConnectionFactory).
I have, however, done exactly what you are trying to do in one project. The core idea is not to use the vendor-specific classes, but the JMS interfaces in the code. Then you could store the configuration in JNDI (one entry for integration testing and one for production/acceptance testing).
If you do not want to use JNDI, you could perhaps use a different Spring context for each scenario (that was my approach).
Let's take a simple example:
You have two separate applicationContext.xml files (one for the embedded test and one for production).
int-test:
<beans>
<import resource="jmsTest.xml"/>
<import resource="mainApplication.xml"/>
</beans>
Prod:
<beans>
<import resource="jmsProd.xml"/>
<import resource="mainApplication.xml"/>
</beans>
Then create your jms contexts:
jmsTest.xml:
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<constructor-arg value="vm://localhost:61616"/>
</bean>
jmsProd.xml
<bean id="connectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
<property name="hostName" value=".."/>
...
</bean>
mainApplication.xml (JMS listeners, etc.) is the same for both:
<bean id="myJmsHandlingClass" class="some.custom.Class">
<property name="connectionFactory" ref="connectionFactory"/>
</bean>
Then just make sure to follow the JMS specs and do nothing vendor-specific, since both WMQ and AMQ have extensions to the JMS standard that might be tempting to use.
One tricky part, if you are doing topics, is that AMQ and WMQ use different topic separators by default.
In WMQ: root/subtopic/#
In AMQ: root.subtopic.*
So you might want to inject destinations via Spring as well; it's similar to the connection factory above.
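A sketch of what some.custom.Class might look like, coded purely against the javax.jms interfaces (class and method names are illustrative); because both the connection factory and the destination are injected, the same bean runs unchanged against ActiveMQ in tests and WebSphere MQ in production:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Session;

public class JmsHandlingClass {

    private ConnectionFactory connectionFactory; // injected: AMQ in test, WMQ in prod
    private Destination destination;             // injected too, since topic naming differs per vendor

    public void setConnectionFactory(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public void setDestination(Destination destination) {
        this.destination = destination;
    }

    public void send(String text) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            // Plain JMS 1.1 API calls only: no vendor extensions anywhere.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(destination).send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }
}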