Apache Geode Redis Adapter cannot use a PERSISTENT region type - gemfire

I want to create a GeodeRedisServer whose region type is REPLICATE_PERSISTENT. According to the documentation:
gfsh> start server --name=server1 --redis-bind-address=localhost \
--redis-port=11211 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
I ran the command, but it was not successful. The error is:
gfsh>start server --name=server2 --server-port=40405 --redis-bind-address=192.168.16.36 --redis-port=6372 --J=-Dgemfireredis.regiontype=REPLICATE_PERSISTENT
Starting a Geode Server in /home/apache-geode-1.10.0/my_geode/server2...
The Cache Server process terminated unexpectedly with exit status 1. Please refer to the log file in /home/apache-geode-1.10.0/my_geode/server2 for full details.
Exception in thread "main" java.lang.NullPointerException
at org.apache.geode.internal.cache.LocalRegion.findDiskStore(LocalRegion.java:7436)
at org.apache.geode.internal.cache.LocalRegion.<init>(LocalRegion.java:595)
at org.apache.geode.internal.cache.LocalRegion.<init>(LocalRegion.java:541)
...
In the log I can see:
[info 2019/12/10 15:55:11.097 CST <main> tid=0x1] _monitoringRegion_192.168.16.36<v1>41001 is done getting image from 192.168.16.36(locator1:62278:locator)<ec><v0>:41000. isDeltaGII is false
[info 2019/12/10 15:55:11.097 CST <main> tid=0x1] Initialization of region _monitoringRegion_192.168.16.36<v1>41001 completed
[info 2019/12/10 15:55:11.379 CST <main> tid=0x1] Initialized cache service org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl
[info 2019/12/10 15:55:11.396 CST <main> tid=0x1] Initialized cache service org.apache.geode.cache.lucene.internal.LuceneServiceImpl
[info 2019/12/10 15:55:11.397 CST <main> tid=0x1] Starting GeodeRedisServer on bind address 192.168.16.36 on port 6379
[error 2019/12/10 15:55:11.402 CST <main> tid=0x1] java.lang.NullPointerException
[info 2019/12/10 15:55:11.405 CST <Distributed system shutdown hook> tid=0x15] VM is exiting - shutting down distributed system
But when I change the region type to REPLICATE, it succeeds.
I also want to make my Redis service highly available. I can create the Redis adapter service now, but once it hangs, it is no longer available. Is there any way to create a Redis adapter cluster? I haven't found a method for this in the official documentation. I hope someone can help me, thank you very much.
As a side note, the version I tested was 1.9.0.

Unfortunately this appears to be broken and I don't think there is a good workaround. I've opened a ticket for this: https://issues.apache.org/jira/browse/GEODE-7721

I think Jens is right that the REPLICATE_PERSISTENT policy seems to be broken.
You also asked, "Is there any way to create a redis adapter cluster?" If you use the REPLICATE or PARTITION_REDUNDANT data policies, I think it should work: if you start additional Geode servers, they will make redundant copies of your data (a sketch follows below).
Do be aware that this adapter is still experimental and in development.
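As an illustrative sketch of that setup (not from the official docs; the names, ports, and single-host layout are assumed): start a locator, then start two servers with a redundant region type so each holds a copy of the Redis data:
gfsh>start locator --name=locator1
gfsh>start server --name=redis1 --server-port=40404 --redis-bind-address=localhost --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT
gfsh>start server --name=redis2 --server-port=40405 --redis-bind-address=localhost --redis-port=6380 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT
A Redis client can then connect to either port (6379 or 6380); if one server dies, the data should still be served by the other.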

Related

Kafka-s3-connect killed instantly after start

I want to connect AWS Kafka (MSK) with S3 using the Confluent connector on my EC2 server. I tried to configure everything as in the tutorials. When I run connect-standalone or connect-distributed, at first everything goes well and I don't get any errors in the logs, but right after the information about the connection starting, my connector dies instantly without any further output. Has anybody had the same problem?
config/connect-standalone.properties
bootstrap.servers=msk-connection-string
plugin.path=/home/ubuntu/connectors/confluentinc-kafka-connect-s3
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
offset.storage.file.filename=/tmp/connect.offsets
connector.properties
connector.class=io.confluent.connect.s3.S3SinkConnector
format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
flush.size=1
topics=SomeTopic
s3.bucket.name=bucket-name-here
s3.region=us-west-2
s3.part.size=5242880
aws.access.key.id=****
aws.secret.access.key=****
behavior.on.null.values=ignore
storage.class=io.confluent.connect.s3.storage.S3Storage
topics.dir=../topics
store.url=http://bucket-name.s3-website-Region.amazonaws.com
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
logs:
[2021-08-20 06:32:35,954] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser:119)
[2021-08-20 06:32:35,954] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser:120)
[2021-08-20 06:32:35,954] INFO Kafka startTimeMs: 1629441155953 (org.apache.kafka.common.utils.AppInfoParser:121)
Killed
Please help!
MSK requires a TLS connection.
After adding a few lines of SSL configuration to config/connect-standalone.properties:
producer.security.protocol=SSL
consumer.security.protocol=SSL
security.protocol=SSL
ssl.protocol=TLS
ssl.truststore.location=/your/path/to/truststore/kafka.client.truststore.jks
It starts working properly!
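For reference (paths assumed, not part of the original question), the standalone worker takes the worker properties followed by the connector properties, so both files above are passed on one command line from the Kafka installation directory:
bin/connect-standalone.sh config/connect-standalone.properties connector.properties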

activemq failover using multiple instances in master slave mode on same linux machine

I have set up multiple ActiveMQ instances to achieve failover in master/slave mode on Windows.
While setting this up, I just created 3 instances under the bin folder without changing any ports and started all 3 instances one by one. The first instance became master and the remaining ones stayed in slave mode until I stopped the master instance.
Now I am trying to achieve the same in a Linux environment. The first instance starts successfully, but when I start the second instance in a different window it throws the error below:
ERROR | Failed to start Apache ActiveMQ ([instance2, ID:132vm6-57227-1478597606120-0:1], java.io.IOException: Transport Connector could not be registered in JMX: java.io.IOException: Failed to bind to server socket: tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 due to: java.net.BindException: Address already in use)
INFO | Apache ActiveMQ 5.14.0 (instance2, ID:132vm6-57227-1478597606120-0:1) is shutting down
INFO | Connector openwire stopped
INFO | Connector amqp stopped
INFO | Connector stomp stopped
INFO | Connector mqtt stopped
INFO | Connector ws stopped
INFO | PListStore:[/opt/apache-activemq-5.14.0/bin/instance2/data/instance2/tmp_storage] stopped
INFO | Stopping async queue tasks
INFO | Stopping async topic tasks
INFO | Stopped KahaDB
INFO | Apache ActiveMQ 5.14.0 (instance2, ID:132vm6-57227-1478597606120-0:1) uptime 0.585 seconds
INFO | Apache ActiveMQ 5.14.0 (instance2, ID:132vm6-57227-1478597606120-0:1) is shutdown
INFO | Closing org.apache.activemq.xbean.XBeanBrokerFactory$1#4233871a: startup date [Tue Nov 08 15:03:24 IST 2016]; root of context hierarchy
WARN | Exception thrown from LifecycleProcessor on context close
java.lang.IllegalStateException: LifecycleProcessor not initialized - call 'refresh' before invoking lifecycle methods via the context: org.apache.activemq.xbean.XBeanBrokerFactory$1#4233871a: startup date [Tue Nov 08 15:03:24 IST 2016]; root of context hierarchy
at org.springframework.context.support.AbstractApplicationContext.getLifecycleProcessor(AbstractApplicationContext.java:357)[spring-context-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:884)[spring-context-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:843)[spring-context-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.apache.activemq.hooks.SpringContextHook.run(SpringContextHook.java:30)[activemq-spring-5.14.0.jar:5.14.0]
at org.apache.activemq.broker.BrokerService.stop(BrokerService.java:875)[activemq-broker-5.14.0.jar:5.14.0]
at org.apache.activemq.xbean.XBeanBrokerService.stop(XBeanBrokerService.java:122)[activemq-spring-5.14.0.jar:5.14.0]
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:629)[activemq-broker-5.14.0.jar:5.14.0]
at org.apache.activemq.xbean.XBeanBrokerService.afterPropertiesSet(XBeanBrokerService.java:73)[activemq-spring-5.14.0.jar:5.14.0]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.7.0_65]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)[:1.7.0_65]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.7.0_65]
at java.lang.reflect.Method.invoke(Method.java:606)[:1.7.0_65]
I am using ActiveMQ version 5.14.
If anybody has encountered a similar issue, kindly provide your inputs.
To get multiple instances of ActiveMQ running on the same machine, you need to change the ports that they try to open. There are (at least) 3 ports that need to be changed:
The transportConnector ports that accept messaging traffic. These are defined in the activemq.xml file. Typically you only need the openwire one - this is 61616 by default; I usually change this in the other ActiveMQ instances to 61626, 61636, etc. You can usually comment out the others if you don't intend to use them.
The Jetty HTTP port. This is defined in the jetty.xml file. The default is 8161; set the next ones to 8162, 8163, etc.
The JMX port. This one's a bit tricky, as you need to stick a piece of config into the activemq.xml to define it explicitly, as follows:
<managementContext>
<managementContext createConnector="true" connectorPort="1099"/>
</managementContext>
You can then change this to 1199, 1299 on the other instances. Hope this helps.
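For illustration, the openwire port change for a second instance might look like this in its activemq.xml (a sketch; the URI options are copied from the error message above, only the port is changed):
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61626?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>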

ActiveMQ Master/Slave on Weblogic - vm transport issue

I am trying to configure an ActiveMQ master/slave setup on a single WebLogic machine. The problem is that when I start Managed Server1 it successfully connects to the vm transport and everything works perfectly, but when I start Managed Server2 I receive the following errors in the broker logs:
INFO 2016-September-27 10:08:00,227 ActiveMQEndpointWorker:124 - Connection attempt already in progress, ignoring connection exception
INFO 2016-September-27 10:08:01,161 TransportConnector:260 - Connector vm://localhost started
INFO 2016-September-27 10:08:30,228 TransportConnector:291 - Connector vm://localhost stopped
INFO 2016-September-27 10:08:30,229 TransportConnector:260 - Connector vm://localhost started
WARN 2016-September-27 10:08:30,228 ActiveMQManagedConnection:385 - Connection failed: javax.jms.JMSException: peer (vm://localhost#61) stopped.
WARN 2016-September-27 10:08:30,231 TransportConnection:823 - Failed to add Connection ID:ndl-wls-300.mydomain.com-52251-1474966937425-65:1 due to java.lang.NullPointerException
ERROR 2016-September-27 10:08:30,233 ActiveMQEndpointWorker:183 - Failed to connect to broker [vm://localhost?create=false]: java.lang.NullPointerException
javax.jms.JMSException: java.lang.NullPointerException
Please help, I am stuck with this.
I still don't see the reason for a slave within the same VM. I suggest you reach out to an ActiveMQ expert consultant to validate your architecture.
However, I think I can help you move a little bit closer on this issue:
There is a fundamental misunderstanding here: the vm url is broken down like this:
vm://${brokerName}?option=value,etc
The first time, you create vm://localhost?create=true: you have created a broker.
The second time, you reference vm://localhost?create=false: you have created a client connection to the first broker.
To get two brokers, you'd need two different vm://${brokerName}?create=true URIs.
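A minimal sketch of the same distinction in code (the broker name brokerA is assumed, not taken from your setup):
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

// The first connection made with create=true starts an embedded broker named brokerA in this JVM.
Connection owner = new ActiveMQConnectionFactory("vm://brokerA?create=true").createConnection();
// create=false only attaches a client to the already-running brokerA; it never starts a broker.
Connection client = new ActiveMQConnectionFactory("vm://brokerA?create=false").createConnection();
A second broker in the same JVM would need its own name, e.g. vm://brokerB?create=true.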

GemFire 8.2.0 ClientCacheFactory addPoolLocator, how to set list of locators on client

I have this code
import java.util.Arrays;
import java.util.List;
import com.gemstone.gemfire.cache.client.ClientCacheFactory; // GemFire 8.2 package

ClientCacheFactory clientCacheFactory = new ClientCacheFactory();
clientCacheFactory.set("cache-xml-file", "client-cache.xml");
clientCacheFactory.set("mcast-port", "0");
List<String> locators = Arrays.asList(peersIp.split(","));
for (String locator : locators) {
    clientCacheFactory.addPoolLocator(locator, 11001);
}
I am adding a list of locator IPs to the ClientCacheFactory.
In the client's logs I see:
[info 2016/02/09 13:14:15.440 UTC tid=0x1] Running in local mode since mcast-port was 0 and locators was empty.
[info 2016/02/09 13:14:15.694 UTC tid=0x1] Pool DEFAULT started with multiuser-authentication=false
[info 2016/02/09 13:14:15.725 UTC tid=0x16] Updating membership port. Port changed from 0 to 49,879.
My client-cache.xml:
<!DOCTYPE client-cache PUBLIC
"-//GemStone Systems, Inc.//GemFire Declarative Caching 6.6//EN" "http://www.gemstone.com/dtd/cache6_6.dtd">
<client-cache>
<region name="exampleRegion" refid="PROXY"/>
</client-cache>
I want to use the ClientCacheFactory to define a list of locators for the client to connect to.
The method addPoolLocator looks like just what I want, but the logs say otherwise.
The logs are misleading you in this case. It looks like your pool has been created, so you should be good to go. You can verify by connecting gfsh to the locator and issuing a describe region command.
$ cd $GEMFIRE_HOME
$ ./bin/gfsh
gfsh>connect --locator=localhost[10334]
gfsh>describe region --name=/MyRegion
gfsh>list members
gfsh>list clients
Please refer to gfsh documentation for more commands, or just type help in gfsh.
The logs actually mean you are not connected as a peer to the GemFire distributed system. The locators property refers to the peer-to-peer locators, not the ones in the pool. I can see how this is misleading; I will file an issue against Apache Geode.
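In other words, addPoolLocator configures the client pool (client-to-server connections), while the empty "locators" in that log line is the peer-to-peer property, which a client normally leaves unset. A minimal sketch (locator host assumed; imports shown for Apache Geode, while GemFire 8.2 uses the equivalent com.gemstone.gemfire.cache.client classes):
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

// The pool locator below is what addPoolLocator sets; the "Running in local mode"
// log message refers to the unrelated peer-to-peer locators property.
ClientCache cache = new ClientCacheFactory()
    .set("mcast-port", "0")
    .addPoolLocator("locator-host", 11001) // hypothetical locator host
    .create();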

Issue with OpenShift Origin MongoDB service

I have installed OpenShift Origin V3 on AWS EC2 (Fedora 19) using oo-install. The setup is one Broker + one Node.
I was making some modifications to the security groups to make the setup more restrictive, and it ended up causing some issues in the mongo service:
1. service mongod does not start up and the status shows failed.
The /var/log/mongodb/mongodb.log says:
Thu Mar 6 11:24:08.189 [initandlisten] ERROR: listen(): bind() failed errno:99 Cannot assign requested address for socket: :27017
Thu Mar 6 11:24:08.189 [initandlisten] now exiting
Running oo-accept-broker -v says
FAIL: error logging into mongo db: MOPED: Retrying connection to primary for replica set :27017">]>: MOPED: Retrying connection to primary for replica set :27017">]>/MOPED: --username Retrying, exit code: 1
Any pointers on how to resolve this will be greatly appreciated.
Thanks
Shabna
I would try rolling back your changes to the security groups first, and then make the changes one by one to see which one causes the issue. Then post that specific change and see if anyone can comment on why it affects mongodb.
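As a side note (an inference from the log line above, not from the security-group changes themselves): bind() failing with errno:99 usually means mongod was told to bind an IP address that is not assigned to any local interface. It may be worth checking the bind_ip line in the mongod configuration against the addresses the instance currently owns, for example:
# /etc/mongodb.conf (path and value assumed); use an address the instance actually has
bind_ip = 127.0.0.1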