I am a QA engineer using JMeter 5.3 with the Redis Data Set plugin (version 0.3) installed.
In my tests I want to get data from a Redis database using Redis Data Set. The problem is that the data is stored in the Hash data structure, but Redis Data Set doesn't support Hashes (only Lists and Sets).
My question is: is there a different way to get data from a Redis Hash via JMeter, or is it not currently possible? Do you know if there are any plans to add Hash support to this plugin?
Thank you in advance for your answers. Best regards.
You have 3 options:
State that it is not possible
Try to reach out to the Redis plugin developers/maintainers via the JMeter Plugins Support Forum and ask them to implement this functionality asap
Use JSR223 Test Elements and the Groovy language to read the data from Redis hash entries; it can be done relatively simply. Assuming the example given here:
HSET myhash field1 "Hello"
you can read the value in any suitable JSR223 Test Element as:
// requires the Jedis client jar in JMeter's lib folder
def jedis = new redis.clients.jedis.Jedis('your_redis_host', your_redis_port)
def value = jedis.hget('myhash', 'field1') // returns "Hello"
jedis.close()
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It
Related
What should I use: cache.put(key, value) or cache.query("INSERT INTO Table ")?
If you have properly configured queryable fields for your cache, you can use either approach to insert data into the cache:
Key-Value API as shown here.
SqlFieldsQuery as described here.
Also, in case you would like to load a large amount of data, you can use a Data Streamer, which automatically buffers the data and groups it into batches for better performance (see the Java sketch below).
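For illustration, here is a minimal Java sketch showing all three write paths. The cache name, the Person POJO and its fields are assumptions made up for the example, not something from the question:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgniteInsertExamples {
    public static class Person {
        @QuerySqlField public long id;
        @QuerySqlField public String name;
        public Person(long id, String name) { this.id = id; this.name = name; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("Person");
            cfg.setIndexedTypes(Long.class, Person.class); // makes the cache queryable via SQL
            IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

            // 1. Key-Value API
            cache.put(1L, new Person(1L, "Alice"));

            // 2. SqlFieldsQuery (the _key column maps to the cache key)
            cache.query(new SqlFieldsQuery(
                    "INSERT INTO Person (_key, id, name) VALUES (?, ?, ?)")
                    .setArgs(2L, 2L, "Bob")).getAll();

            // 3. Data streamer for bulk loading
            try (IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("Person")) {
                for (long i = 3; i < 1_000; i++) {
                    streamer.addData(i, new Person(i, "person-" + i));
                }
            }
        }
    }
}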
Any. Or both.
One of the powers of Ignite is that it's truly multi-model - the same data is accessible via different interfaces. If you migrate a legacy app from an RDBMS, you'll use SQL. If you have something simple and don't care about the schema or queries, you'll use key-value.
In my experience, non-trivial systems based on Apache Ignite tend to use different kinds of access simultaneously. A perfectly normal example of an app:
Use key-value to insert the data from an upstream source
Use SQL to read and write data in batch processing and analytics
Use Compute with both SQL and key-value inside the tasks to do colocated processing and fast analytics (a sketch of this follows the list)
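As a rough illustration of the last point, a colocated task in Java might look like this. It reuses the hypothetical Person cache from the sketch above, and the key 42L is just a placeholder:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ColocatedProcessing {
    public static void run(Ignite ignite) {
        // The closure is sent to the node that owns key 42, so both accesses below stay local.
        ignite.compute().affinityRun("Person", 42L, () -> {
            Ignite local = Ignition.localIgnite();
            Object person = local.cache("Person").get(42L); // key-value read on the owning node
            System.out.println(person);
            local.cache("Person")
                 .query(new SqlFieldsQuery("SELECT COUNT(*) FROM Person").setLocal(true)) // local SQL
                 .getAll();
        });
    }
}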
I have a large number of key-value pairs of different types to be stored in a Redis cache. Currently I use a single Redis node. When my app server starts, it reads a lot of this data in bulk (using mget) to cache it in memory.
To scale up Redis further, I want to set up a cluster. I understand that in cluster mode, I cannot use mget or mset if keys are stored on different slots.
How can I distribute data into different nodes/slots and still be able to read/write in bulk?
It's handled in the Redis client library. You need to check whether a library with this feature exists in the language of your choice. For example, if you are using Go, then per its docs redis-go-cluster provides this feature.
https://redis.io/topics/cluster-tutorial
redis-go-cluster is an implementation of Redis Cluster for the Go language using the Redigo library client as the base client. Implements MGET/MSET via result aggregation.
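If you are on the JVM instead, the same aggregation idea can be hand-rolled on top of Jedis. A minimal sketch, assuming a recent Jedis version; the host, port and key names are placeholders:

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.util.JedisClusterCRC16;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ClusterBulkGet {
    // Groups keys by hash slot so each MGET only touches keys that live together,
    // then aggregates the per-slot results into one map.
    public static Map<String, String> bulkGet(JedisCluster cluster, List<String> keys) {
        Map<Integer, List<String>> bySlot = new HashMap<>();
        for (String key : keys) {
            bySlot.computeIfAbsent(JedisClusterCRC16.getSlot(key), s -> new ArrayList<>()).add(key);
        }
        Map<String, String> result = new HashMap<>();
        for (List<String> slotKeys : bySlot.values()) {
            List<String> values = cluster.mget(slotKeys.toArray(new String[0])); // single-slot MGET
            for (int i = 0; i < slotKeys.size(); i++) {
                result.put(slotKeys.get(i), values.get(i));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        JedisCluster cluster = new JedisCluster(new HostAndPort("127.0.0.1", 7000)); // placeholder node
        System.out.println(bulkGet(cluster, Arrays.asList("user:1", "user:2", "order:7")));
    }
}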
My use case for Google Cloud Dataflow is to use Redis as a cache during the pipeline, since the transformation to occur depends on some cached data. This would mean performing Redis GET commands. The documentation for the official, built-in Redis I/O transform mentions supporting a few methods:
read - "provides a source which returns a bounded PCollection containing key/value pairs as KV"
readAll - "can be used to request Redis server using input PCollection elements as key pattern (as String)"
It looks like the readAll does not correspond to a GET command though because the input PCollection would be used to filter the result of scanning a whole Redis source, so this isn't what I'm looking for.
I was wondering if there is something I'm missing when looking at the built-in I/O transform that would enable my use case, or whether there are alternatives like open source 3rd party I/O transforms that support it. Or, is this something that is fundamentally incompatible with Apache Beam?
You can use RedisConnectionConfiguration. It gives you a serializable connection configuration that you can use in your transforms, for example as in the sketch below.
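A minimal sketch of a DoFn that performs a GET per input element. The host, port and the way the connection is opened are assumptions; if connect() is not accessible in your Beam version, constructing a Jedis client directly from the same host/port works the same way:

import org.apache.beam.sdk.io.redis.RedisConnectionConfiguration;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import redis.clients.jedis.Jedis;

public class RedisGetFn extends DoFn<String, KV<String, String>> {
    // The configuration itself is serializable, so it can travel with the DoFn.
    private final RedisConnectionConfiguration connConfig =
            RedisConnectionConfiguration.create("redis-host", 6379);

    private transient Jedis jedis;

    @Setup
    public void setup() {
        jedis = connConfig.connect(); // one connection per DoFn instance
    }

    @ProcessElement
    public void processElement(@Element String key, OutputReceiver<KV<String, String>> out) {
        out.output(KV.of(key, jedis.get(key))); // plain GET per element
    }

    @Teardown
    public void teardown() {
        if (jedis != null) {
            jedis.close();
        }
    }
}

It would then be applied to the PCollection of keys with something like ParDo.of(new RedisGetFn()).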
(a very similar question has been asked but has no answers)
I have a job processor (Node.js) that takes in a couple of fields, runs a query and some data manipulation on the result, then sends the final result out to a RabbitMQ queue. I have logging set up with Bunyan.
Now we'd like to log the results. A typical record in this log would look like:
{
  "queryTime": 1460135319890,
  "transID": "d5822210-8f87-4327-b43c-957b1ff96306",
  "customerID": "AF67879",
  "processingTime": 2345,
  "queryStartDate": "1/1/2016",
  "queryEndDate": "1/5/2016",
  "numRecords": 20,
  "docLength": 67868
}
The org has an existing ELK stack set up. I've got enough experience with Redis that it would be very simple to just push the data I want out to the Redis instance in the ELK stack. It seems a lot easier than setting up Logstash and messing around with its config.
I'd like to be able to visualize the customerID, processingTime and numRecords fields (to start). Is Kibana the right choice for this? Can I push data directly to it instead of messing around with Logstash?
Kibana doesn't have a datastore of its own; it relies on Elasticsearch, i.e. Kibana uses the data stored in Elasticsearch to provide visualizations.
Hence you cannot push Redis logs directly into Kibana, bypassing Elasticsearch. To get logs out of Redis you need Logstash and Elasticsearch to ingest your data.
Approach: use Logstash and create a Logstash configuration file whose input section uses the redis plugin and whose output section uses the elasticsearch plugin, for example:
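A minimal sketch of such a config; the Redis key, data_type and index name are placeholders to adapt to your setup:

input {
  redis {
    host => "127.0.0.1"
    data_type => "list"      # the app LPUSHes JSON log records onto this list
    key => "job-logs"
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "job-logs-%{+YYYY.MM.dd}"
  }
}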
Kibana is an open-source tool which fits well with what you want to achieve and with your organization's existing setup.
We provide native support for Node.js, so you could push data directly, bypassing Logstash.
(disclosure - I'm an evangelist at Logz.io)
We have a huge set of data stored on a Hadoop cluster. We need to do some analysis on this data using Apache Spark and provide the results of this analysis to other applications via an API.
I have two ideas, but I cannot figure out which one is recommended.
The first option is to build Spark application(s) that perform the analysis and store the result in another datastore (a relational DB or even HDFS), then develop another application that reads the analysis result from that datastore and provides an API for querying.
The second option is to merge the two applications into one. This way I avoid the need for another datastore, but then the application has to be up and running all the time.
What is the recommended way to go in this situation? And if there are other options, kindly list them.
It depends on how frequently users are going to hit the GET API. If the client wants real-time results, you should go for the in-line API (the second option); otherwise you can use the first approach of storing the result in another data store.
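To make the first approach concrete, here is a minimal Java/Spark sketch. The HDFS path, the aggregation, and the JDBC target are all placeholders for illustration:

import java.util.Properties;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class AnalysisJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("analysis").getOrCreate();

        // Read the raw data from the Hadoop cluster and compute the analysis result.
        Dataset<Row> result = spark.read().parquet("hdfs:///data/events")
                .groupBy("customerId")
                .count();

        // Persist the result in a separate datastore; the API application only queries this table.
        Properties props = new Properties();
        props.setProperty("user", "analytics");
        props.setProperty("password", "secret");
        result.write().mode("overwrite")
                .jdbc("jdbc:postgresql://db-host:5432/results", "analysis_results", props);

        spark.stop();
    }
}

A separate lightweight API service then reads from the results table, so the Spark job only needs to run on a schedule rather than stay up all the time.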