Redis DB with JMeter Integration - Cannot Connect

I have a question about connecting a Redis DB to JMeter using the jp@gc - Redis Data Set plugin.
I created a test and want to read a value from Redis. The problem is that when I add the Redis Data Set component nothing happens; I press the Play button and nothing happens.
I think I have not configured Redis correctly in JMeter.
I didn't create any variables other than naming a new variable called dsos.
I just want to pass the value stored in Redis under dsos_13_173 to the parameter dsos.
1. How can I see why the configuration did not succeed?
2. What am I missing?
I am using JMeter 3.2 with plugin v0.2, installed from the Plugins Manager. The DB is remote, so I am using an IP address rather than localhost as in all the examples.
Regards

The Redis Data Set config acts like the CSV Data Set Config, so if you want to use the data from Redis in, for example, an HTTP Request sampler, you just need to refer to it as ${dsos} where required.
You can also double-check the associated JMeter variable value using the Debug Sampler.
See JMeter's Redis Data Set - An Introduction for a comprehensive explanation, step-by-step instructions, and an example test plan.
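If you prefer to check the value from a script instead of the Debug Sampler, a short JSR223/BeanShell sampler can log it. This is only a minimal sketch in Java syntax, assuming the variable name dsos from the question; the vars and log objects are bindings JMeter provides to scripting samplers:

// Java-syntax snippet for a JSR223/BeanShell sampler; vars and log are JMeter-provided bindings
String dsos = vars.get("dsos"); // value fed by the Redis Data Set config
if (dsos == null) {
    log.warn("Variable 'dsos' is empty - check the Redis Data Set configuration and connection");
} else {
    log.info("dsos read from Redis: " + dsos);
}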

Related

Adding basic authentication to Solr 8.6.1

We are having some difficulty adding basic authentication to Solr 8.6.1. We are following this document and have created the security.json file, which works (the Solr instance asks for a user ID and password when it starts). Our difficulty is with enabling the global authentication settings: we passed the -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory system property, and we also set the -Dbasicauth=username:password property as follows:
// the following is the last line of our Solr Dockerfile:
CMD ["solr-foreground", "-Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory", "-Dbasicauth=username:secret"]
However, the calls to retrieve data from Solr all come back with Error 401, authentication required.
Could someone please let us know what we missed?
You'll have to set the correct options on the client, not on the server. This is a setting that affects how the client that connects to Solr authenticates.
So when running your application, give the parameter to the java command (or configure it as a default parameter through Ant/Maven/Gradle/etc.).
Setting it on the Docker container that runs Solr will not do anything useful.
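As a rough illustration of what "on the client" means, here is a minimal SolrJ sketch; the URL, collection name and credentials are placeholders, and per-request credentials via setBasicAuthCredentials are just one option besides passing the two -D properties to the client JVM:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SolrBasicAuthExample {
    public static void main(String[] args) throws Exception {
        // Placeholder base URL and collection name
        HttpSolrClient client = new HttpSolrClient.Builder("http://solr-host:8983/solr/mycollection").build();
        try {
            QueryRequest request = new QueryRequest(new SolrQuery("*:*"));
            // Credentials of a user defined in security.json
            request.setBasicAuthCredentials("username", "secret");
            QueryResponse response = request.process(client);
            System.out.println("Documents found: " + response.getResults().getNumFound());
        } finally {
            client.close();
        }
    }
}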

NiFi - Call Redis Commands in Groovy Script

I am trying to connect to my Redis instance from a Groovy script (ExecuteGroovyScript) and execute arbitrary commands such as LPUSH. I currently have a RedisConnectionPoolService enabled and working fine for caching processors.
Is there any way to achieve this? Any examples are appreciated.
EDIT:
I got to the point where I can call a command, but for some reason it fails. Here is the code and the error:
service = context.getControllerServiceLookup().getControllerService("2b841623-35ed-1e1a-0a77-46087267939d")
service.getConnection().withCloseable { redis ->
    redis.listCommands().lPush("key".getBytes(), "1".getBytes())
}
If you have a RedisConnectionPoolService called service and call service.getConnection(), you will have a Spring Data Redis RedisConnection instance, so you can check that API for the kinds of calls you can make.
For LPUSH specifically, you can call service.getConnection().listCommands().lPush(), as in the code above.
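For reference, the same Spring Data Redis calls look like this in plain Java outside NiFi. The standalone JedisConnectionFactory below is only a stand-in for what the RedisConnectionPoolService gives you, and the host/port are placeholders (assuming Spring Data Redis 2.x on the classpath):

import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

public class RedisLPushExample {
    public static void main(String[] args) {
        // Stand-in for NiFi's RedisConnectionPoolService: a plain standalone connection factory
        JedisConnectionFactory factory =
                new JedisConnectionFactory(new RedisStandaloneConfiguration("localhost", 6379));
        factory.afterPropertiesSet();

        RedisConnection connection = factory.getConnection();
        try {
            // RedisListCommands.lPush maps to the Redis LPUSH command
            Long newLength = connection.listCommands().lPush("key".getBytes(), "1".getBytes());
            System.out.println("List length after LPUSH: " + newLength);
        } finally {
            connection.close();
        }
        factory.destroy();
    }
}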

How to set a specific port for single-user Jupyterhub server REST API calls?

I have set up Spark SQL on Jupyterhub using the Apache Toree SQL kernel. I wrote a Python function to update the Spark configuration options in the kernel.json file so my team can change the configuration based on their queries and cluster setup. But I have to shut down the running notebook and re-open it, or restart the kernel, after running the Python function. In this way, I'm forcing the Toree kernel to read the JSON file to pick up the new configuration.
I thought of implementing this shutdown and restart of the kernel programmatically. I found the Jupyterhub REST API documentation and am able to implement it by invoking the related APIs. But the problem is that the single-user server API port is set randomly by the Spawner object of Jupyterhub, and it keeps changing every time I spin up a cluster. I want this port to be fixed before launching the Jupyterhub service.
Here is a solution I tried based on Jupyterhub docs:
sudo echo "c.Spawner.port = 35289
c.Spawner.ip = '127.0.0.1'" >> /etc/jupyterhub/jupyterhub_config.py
But this did not work, as the port was again set randomly by the Spawner. I think there must be a way to fix this. Any help would be greatly appreciated. Thanks.

Mule Redis connector configuration is not consistent with its documentation

Recently we decided to add a cache layer to our Mule APIs, and Redis came into scope.
We are on Mule 3.8.0 and Redis connector 4.0.0, and we ran into the following issues while configuring it:
How do we separate our keys by Redis DB? This is not mentioned in the documentation. The only setting that seems close is 'Default Partition Name' in the configuration, but whatever value we put there seems to have no effect: it is always db0 that contains all the keys, so we can't really have "dev", "qa" and "test" key sets in the same Redis cluster.
The Redis connector documentation has an example as below:
<redis:sorted-set-select-range-by-index config-ref="Redis_configuration" key="my_key" start="0" end="-1" />
However, when we tried the same thing it complained that the 'end' value should be >= 0, so the example is not usable as given.
How do we configure a connection pool properly in the Redis connector configuration? Again, this is not mentioned in the documentation. The only attribute is 'Pool Config Reference'; I tried putting a Spring bean reference to my own JedisPoolConfig there, but it seems to have no effect, and the number of connections remains the same no matter what values I put in that bean.
Thanks in advance if someone could help with these issues.
James
How to separate our keys by Redis DB?
You can use Redis in cluster mode with data sharding (http://redis.io/topics/cluster-tutorial).
I don't think you need special configuration in Mule.
I think you are mixing up the Partition term in Mule with the partitioning concept in Redis.
Regards,

How to submit code to a remote Spark cluster from IntelliJ IDEA

I have two clusters, one in a local virtual machine and another in a remote cloud. Both clusters are in standalone mode.
My Environment:
Scala: 2.10.4
Spark: 1.5.1
JDK: 1.8.40
OS: CentOS Linux release 7.1.1503 (Core)
The local cluster:
Spark Master: spark://local1:7077
The remote cluster:
Spark Master: spark://remote1:7077
I want to finish this:
Write code (just a simple word count) in IntelliJ IDEA locally (on my laptop), set the Spark master URL to either spark://local1:7077 or spark://remote1:7077, and then run the code from IntelliJ IDEA. That is, I don't want to use spark-submit to submit a job.
But I ran into a problem:
When I use the local cluster, everything goes well. Running the code from IntelliJ IDEA or using spark-submit both submit the job to the cluster and the job finishes.
But when I use the remote cluster, I get a warning in the log:
TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Note that it says sufficient resources, not sufficient memory!
This log line keeps printing with no further progress. Both spark-submit and running the code from IntelliJ IDEA give the same result.
I want to know:
Is it possible to submit code from IntelliJ IDEA to the remote cluster?
If it is, what configuration is needed?
What are the possible reasons that can cause my problem?
How can I handle this problem?
Thanks a lot!
Update
There is a similar question here, but I think my situation is different. When I run my code in IntelliJ IDEA with the Spark master set to the local virtual machine cluster, it works; against the remote cluster I get the Initial job has not accepted any resources;... warning instead.
I want to know whether a security policy or firewalls could be causing this.
Submitting code programmatically (e.g. via SparkSubmit) is quite tricky. At the least, there is a variety of environment settings and considerations, handled by the spark-submit script, that are quite difficult to replicate within a Scala program. I am still uncertain how to achieve it, and there have been a number of long-running threads within the Spark developer community on the topic.
My answer here is about a portion of your post: specifically the
TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have
sufficient resources
The reason is typically a mismatch between the memory and/or number of cores requested by your job and what is available on the cluster. Possibly, when submitting from IntelliJ, the settings in
$SPARK_HOME/conf/spark-defaults.conf
do not match the parameters required for your task on the existing cluster. You may need to update:
spark.driver.memory 4g
spark.executor.memory 8g
spark.executor.cores 8
You can check the Spark UI on port 8080 to verify that the resources you requested are actually available on the cluster.
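When running from IntelliJ IDEA without spark-submit, you can also put these limits directly on the SparkConf so the job never asks for more than the remote workers can offer. This is a minimal sketch in Java; the master URL matches the question, while the memory/core values and spark.driver.host are assumptions to adapt to your cluster (the driver host only matters if the workers cannot reach your laptop at its default address, e.g. because of a firewall or NAT):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class RemoteClusterSmokeTest {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("remote-cluster-smoke-test")
                .setMaster("spark://remote1:7077")
                // Ask for no more than the remote workers actually have (example values)
                .set("spark.executor.memory", "1g")
                .set("spark.cores.max", "2")
                // Address the workers should use to reach the driver on your laptop (assumption)
                .set("spark.driver.host", "your.laptop.ip");

        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            long count = sc.parallelize(Arrays.asList(1, 2, 3, 4)).count();
            System.out.println("count = " + count);
        } finally {
            sc.stop();
        }
    }
}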