Mule Redis connector configuration is not consistent with its documentation

Recently we decided to add a cache layer to our Mule APIs, and Redis came into scope.
We are on Mule 3.8.0 with Redis connector 4.0.0, and we hit the following issues while configuring it:
1. How do we separate our keys by Redis DB? This is not mentioned in the documentation. The only setting that looks close is 'Default Partition Name', but whatever value we put there seems to have no effect: it is always db0 that contains all the keys, so we can't keep separate "dev", "qa" and "test" key sets in the same Redis cluster.
2. The Redis connector documentation has the following example:
<redis:sorted-set-select-range-by-index config-ref="Redis_configuration" key="my_key" start="0" end="-1" />
However, when we try the same thing, it complains that the 'end' value should be >= 0, so the documented example is not usable. (A workaround sketch follows the answer below.)
3. How do we configure a connection pool properly in the Redis connector configuration? Again, this is not mentioned in the documentation. The only attribute is 'Pool Config Reference', and I tried pointing it at a Spring bean reference to my own JedisPoolConfig (shown below), but it seems to have no effect: the number of connections stays the same no matter what values I put in that bean.
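For reference, the bean we point 'Pool Config Reference' at looks roughly like this (a minimal sketch; the bean id and values are our own examples):

<spring:bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
    <spring:property name="maxTotal" value="50"/>  <!-- max connections in the pool -->
    <spring:property name="maxIdle" value="10"/>   <!-- max idle connections kept around -->
</spring:bean>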
Thanks in advance if someone could help with these issues.
James

How do we separate our keys by Redis DB?
You can use Redis in cluster mode with sharded data (http://redis.io/topics/cluster-tutorial).
I don't think you need any special configuration in Mule.
I think you are mixing up the term "partition" in Mule with the term "partition" in Redis.
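Note that Redis Cluster only supports database 0, so numbered databases (SELECT) are only an option against a standalone Redis; in a cluster the usual approach is to prefix keys per environment instead. A minimal Jedis sketch of both approaches (host, port and key names are just examples):

import redis.clients.jedis.Jedis;

public class EnvSeparationDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Standalone Redis: pick a numbered database per environment.
            jedis.select(1);                  // e.g. db1 for "qa"
            jedis.set("user:42", "qa-value");

            // Redis Cluster has only db0, so prefix the keys per environment.
            jedis.select(0);
            jedis.set("dev:user:42", "dev-value");
            jedis.set("qa:user:42", "qa-value");
        }
    }
}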
Regards,
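Regarding the second issue (end="-1" being rejected): assuming the connector passes the indexes straight through to Redis' ZRANGE, an untested workaround is a sufficiently large positive end index, since Redis clamps an out-of-range stop index to the last element of the sorted set:

<redis:sorted-set-select-range-by-index config-ref="Redis_configuration" key="my_key" start="0" end="2147483647" />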

Related

Issue with -Dspring.cloud.bootstrap.enabled when enabling cloud config with master 2.4 version

I upgraded my Spring Boot application to the master POM 2.4 version and am using cloud configs with the property spring.cloud.bootstrap.enabled=true. I have the DB password encrypted in the cloud properties, so by the time the DB properties are used my encryption framework is not yet available, and my application eventually fails with an invalid username and password. (I have my own encryption service.)
I am looking to load the cloud config properties only after my encryption service is available, but spring.cloud.bootstrap.enabled makes them load first, on application startup. Before I upgraded to the master POM I was not using spring.cloud.bootstrap.enabled, so I didn't have this issue; adding the property changed the loading order, which is what I'm running into. Any help will be greatly appreciated. Thanks.
"so by the time i use the db properties i don't have my encryption framework available"
Use the @DependsOn annotation on the bean that uses the DB properties so that it depends on the encryption framework.
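A minimal sketch, assuming the encryption framework is registered as a bean named "encryptionService" (that name and the DataSource wiring below are hypothetical; adapt them to your setup):

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class DataSourceConfig {

    // Force the "encryptionService" bean to be initialized before the
    // DataSource, so the decrypted password is available when we build it.
    @Bean
    @DependsOn("encryptionService")
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl("jdbc:postgresql://localhost/appdb"); // hypothetical URL
        ds.setUsername("appuser");                      // hypothetical user
        ds.setPassword(decryptedDbPassword());          // value decrypted by your framework
        return ds;
    }

    // Hypothetical helper standing in for however your encryption service
    // exposes the decrypted value.
    private String decryptedDbPassword() {
        return System.getProperty("db.password.decrypted", "");
    }
}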

Where can I pass the properties file in DataStage when using the Kafka Connector

There are some properties that I want to change, for instance security.protocol from SASL_PLAINTEXT to SASL_SSL. But the Kafka Connector in DataStage exposes a very limited set of properties (host, use Kerberos, principal name, keytab, topic name, consumer group, max poll records, max messages, reset policy, timeout and classpath).
Reading this documentation, the very first thing to do is to pass the JAAS configuration file. But my questions are:
Where should I put this file? On the DataStage side or the Kafka side?
How can I point to this file?
This is what I tried:
1. Added a before-job subroutine in DataStage that runs the following command:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
2. Added -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf to the Kafka Client Classpath in the Kafka Connector properties in DataStage.
But no matter what I do, every time I run the job the security.protocol parameter remains unchanged:
Kafka_Connector_2,1: security.protocol = SASL_PLAINTEXT
This means it is not reading the properties file.
Have you faced a similar problem?
The Kafka Connector does have support for SASL SSL.
Kafka Connector Properties
This was added in JR61201 for 11.5 and is available in 11.7.1.1.
If you want to insert a JVM option such as
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
you should be able to leverage the CC_JVM_OPTIONS environment variable.
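For example (a sketch; set it wherever your job's environment is configured, e.g. as a job-level environment variable in DataStage or in dsenv):

# Pass the JAAS file to the connector's JVM via CC_JVM_OPTIONS
CC_JVM_OPTIONS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
export CC_JVM_OPTIONS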

Redis DB with JMeter integration - cannot connect

I have a question about connecting a Redis DB to JMeter using jp@gc - Redis Data Set.
I created a test and want to read a value from Redis. The problem is that when I add the Redis Data Set component nothing happens: I press the Play button and nothing happens.
I think I have not configured Redis in JMeter as expected.
I didn't create any variable; I just named a new variable called dsos.
I just want to pass the value of the Redis key dsos_13_173 into the variable dsos.
1. How can I see why the configuration did not succeed?
2. What am I missing?
I am using JMeter 3.2 with plugin v0.2, installed from the Plugins Manager. The DB is remote; I am using an IP address, not localhost as in all the examples.
Regards
The Redis Data Set config element acts like the CSV Data Set Config, so if you want to use the data from Redis in, for example, an HTTP Request sampler, you just need to refer to it as ${dsos} where required.
You can also double-check the associated JMeter variable value using the Debug Sampler.
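Alternatively, a one-line JSR223 Sampler (Groovy) sketch that logs what the Redis Data Set actually put into the variable:

log.info("dsos = " + vars.get("dsos"))    // prints the current value to jmeter.log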
See JMeter's Redis Data Set - An Introduction for a comprehensive explanation, step-by-step instructions and an example test plan.

Splunk search head cluster joining an indexer cluster

Just trying out Splunk; I've had an issue integrating a search head cluster with an indexer cluster.
I have 3 machines in a search head cluster and 3 machines in an indexer cluster. These are all on CentOS 7 with no firewall installed, and all machines are able to ping / view each other's Splunk instances (ip:8000 / ip:8089).
When following https://docs.splunk.com/Documentation/Splunk/6.6.2/DistSearch/SHCandindexercluster, specifically:
splunk edit cluster-config -mode searchhead -master_uri 10.152.31.202:8089 -secret newsecret123
I get the error:
Could not contact master. Check that the master is up, the master_uri=10.152.31.202:8089 and secret are specified correctly
I have removed the https:// part from the IPs above, as I couldn't post with them included.
I have set the pass4SymmKey to be the same on all servers.
thanks
Please check the shclustering pass4SymmKey on both the search head cluster members and the master.
I suspect a pass4SymmKey issue.
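For reference, a sketch of where these keys live (paths and stanzas depend on your setup; the -secret passed to splunk edit cluster-config must match the indexer cluster master's key):

# $SPLUNK_HOME/etc/system/local/server.conf on the indexer cluster master
[clustering]
pass4SymmKey = newsecret123

# server.conf on each search head cluster member (key shared within the SHC)
[shclustering]
pass4SymmKey = newsecret123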
You should check splunkd.log to see what the error is. I would recommend first verifying that it works without setting pass4SymmKey; if it doesn't, then you have found your issue.
Also, you did not mention having an extra server acting as the cluster master. This should be a server independent of your indexers. You do have one, right?

Where are Ambari Macros set

In the Knox config file in Ambari we have defined:
<url>http://{{namenode_host}}:{{namenode_http_port}}/webhdfs</url>
The problem is that we have two namenodes, one active and one passive, for high availability. Our active namenode01 failed, so namenode02 became active.
This caused problems for a lot of scripts, as they were hardcoded to point to namenode01. So we used a terminal command, not Ambari, to fail namenode02 back over to namenode01.
Now the macro {{namenode_host}} is defined as namenode02, not namenode01.
So, where is {{namenode_host}} defined?
Or do we need to fail over from namenode01 to namenode02, and then fail over again back to namenode01 using Ambari, to update the macro?
If we need to fail over the namenode using Ambari, I'm assuming we need to select the "Restart" option? There isn't a direct failover command.
See the issue here:
https://issues.apache.org/jira/browse/AMBARI-12763
This was committed to Ambari to support HA mode for Knox. However, if you're still looking for the location, take a look at the file that's edited in the patch. That file is where the macros are set; you'll have to find it on your local machine, though.
It should be something like params_linux.py.
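As an illustrative sketch only (not the exact patch), the macro values in a stack's params_linux.py are typically pulled from the cluster topology data that Ambari maintains, which is why they can go stale after a failover performed outside Ambari:

# params_linux.py (illustrative sketch, not the exact Ambari source)
from resource_management.libraries.functions.default import default

# {{namenode_host}} resolves from Ambari's clusterHostInfo, not from the live
# HDFS state, so it only updates when Ambari itself drives the failover.
namenode_host = default("/clusterHostInfo/namenode_host", [])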