Where to find a thorough list of node_redis commands?

I'm using redis to store the userId as a key and the socketId as the value. The important point is that the userId doesn't change, but the socketId changes constantly. So I want to edit the socketId value inside redis, but I'm not sure which node_redis command to use. I'm currently just editing with .set(userId, mostRecentSocketId).
In addition, I haven't found a good node_redis API reference anywhere with a complete list of commands. I briefly looked at the redis-commands package, but it doesn't seem to have a full list of commands either.
Any help is appreciated; thanks in advance :)

The full list of Redis commands can be found at https://redis.io/commands. After finding the proper command, it shouldn't be hard to find out how it is proxied in the binding ("API") you use.
Update. To make it clear: you have the Redis server, whose commands are listed in the documentation I linked. Then you have redis-commands - a library for working with Redis (what I called a "binding"). My point was that redis-commands may not have all the commands that the Redis server can handle, and the names of some commands can differ a bit. Other bindings can offer slightly different sets of commands. So it's better to examine the list of commands that the Redis server handles, and then select a binding that allows calling those commands (I'd guess every binding has a set method).
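As for the use case in the question: plain SET is the right command, since it overwrites any existing value stored under the key. A minimal sketch using the callback-style API of node_redis 3.x (the key and value below are placeholders):

const redis = require('redis');
const client = redis.createClient();

// SET overwrites whatever value is currently stored under the key,
// so repeated calls simply replace the old socketId with the new one.
client.set('user:1234', 'newSocketId', (err, reply) => {
  if (err) throw err;
  console.log(reply); // "OK"
});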


How to know when the Solr dataimporthandler has finished indexing?

Is it possible to know when Solr has finished indexing my data?
I work with SolrCloud 4.9.0 and ZooKeeper for managing the configuration files.
I have the data.import file, but it only records when the indexing STARTED, not when it ended.
You can get the dataimporthandler status using:
<MY_SERVER>/solr/dataimport?command=status
Reading the status you can tell whether the import is still running. A similar procedure (with a different URL) is advised in the "Solr in Action" book for checking whether a backup is still running.
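As a minimal sketch of polling that endpoint from Node.js (the host, core name, and JSON status field are assumptions - check the actual response shape for your Solr version; wt=json asks for a JSON response):

const http = require('http');

// Poll the dataimporthandler status endpoint until the import goes idle.
function waitForImport(url, done) {
  http.get(url, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      const status = JSON.parse(body).status; // "busy" while indexing
      if (status === 'idle') return done();
      setTimeout(() => waitForImport(url, done), 5000); // retry in 5s
    });
  });
}

waitForImport('http://localhost:8983/solr/mycore/dataimport?command=status&wt=json',
              () => console.log('Import finished'));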
Another option would involve the use of listeners as advised here.
I also use the /dataimport?command=status approach to check whether the job is done, and while it works, I sometimes get the impression it is a bit flaky.
There are listeners you can use: see here. I would really like to use those, but of course you need to write Java code, package your jar into Solr, etc. So it is a bit of a PITA.

Camel readLock strategy in a cluster

We are trying to move to a cluster with Apache Camel. So far we have run it on one node and it worked well.
One node:
I have the readLock strategy set to 'changed', which tracks file changes with a camelLock file, so that a file is only picked up for processing once it has finished downloading. But the 'changed' readLock strategy is discouraged in clustering; according to the Camel documentation, 'idempotent' is recommended. This is what happens when I test with a 5GB file:
Two nodes:
I have the readLock strategy set to 'idempotent', which distributes files to one of the nodes, but Camel starts processing the file even before it has finished downloading.
Is there a way to stop Camel from processing the file before it has finished downloading when the readLock strategy is idempotent?
Even though both "readLock=changed" and "readLock=idempotent" cause the file-consumer to wait, they really address quite different use-cases: while "readLock=changed" guards against the file being incomplete (i.e. still being written by some producer/sender), "readLock=idempotent" guards against a file being read by two consumer routes. It's a bit confusing that they're addressed by the same option.
First, to address the "changed" scenario: can the sender be changed so that it writes the file to one directory and then, when it is done writing, moves it into the directory being monitored by your file-consumer (a move is an atomic rename on the same filesystem, unlike a copy)? If this is under your control, this is a good way of letting the OS handle things instead of trying to deal with it yourself. (This does not address the issue of the multiple readers.) Otherwise, I suggest you revert to readLock=changed.
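A sketch of that producer-side pattern, with made-up paths (the key point is that the final step is a move within one filesystem, i.e. an atomic rename, so the consumer never sees a partial file):

# write into a staging directory first...
scp bigfile.dat remote:/data/staging/
# ...then move it into the monitored directory
ssh remote mv /data/staging/bigfile.dat /data/inbox/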
Next, on multiple readers: one workaround is to have this route run on only one node of your cluster. This might partly defeat the purpose of clustering, but it is quite possible that you're starting up additional nodes to help with some other routes, and you're fine with this particular route running on just one node. It's a bit of a hack, because the nodes are no longer all equal, but it is still an option to consider. Simplest would be to start one node with some environment property that flags it as the node that will handle file-reading, or some similar approach.
If you do want the route on multiple nodes, you can start by using the option "idempotent=true" but this is not good enough on its own. The option uses a repository, where it records what files have been read before, and the default repository is in-memory (i.e. each node has its own). So, the default implementation is helpful if the same file is actually being received more than once, and you wish to skip it. However, if you want it to work across nodes, you have to use a different repository.
One central repository could be a database; in that case you can use Camel's JDBC- or JPA-based repositories. Or you could use something like Hazelcast. See here for your options: http://camel.apache.org/idempotent-consumer.html
You can use readLock=idempotent-changed.
idempotent-changed uses an idempotentRepository and changed together as the combined read-lock. This allows you to use read locks that support clustering, provided the idempotent repository implementation supports it.
You can read more about these idempotent-changed options here: https://camel.apache.org/components/3.13.x/file-component.html
We also used readLock=changed in Docker clustered mode and it worked perfectly, since we also set readLockMinAge to a certain interval.
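Putting these answers together, a sketch of what such a file endpoint URI might look like (the directory and the repository bean name are placeholders, and the URI is split across lines here only for readability; the option names are documented in the file component docs linked above):

file:/data/inbox?readLock=idempotent-changed
    &idempotentRepository=#clusterRepo
    &readLockCheckInterval=10000
    &readLockMinAge=60000

Both intervals are in milliseconds, and #clusterRepo would have to refer to a cluster-aware repository (e.g. Hazelcast- or JDBC-backed) for the lock to work across nodes.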

Where is the FLUSHALL command in http://try.redis.io/?

I am learning Redis right now, and one of the first things I did was to try issuing commands interactively to an online Redis server: http://try.redis.io/
The FLUSHALL command is well documented here: http://redis.io/commands/flushall and is also referenced in this SO answer.
But when I try to issue it, it is simply not recognized.
My question: why? Where has it gone? After all, the documentation says it is
"Available since 1.0.0."
The web interface at try.redis.io, while executing against a real Redis database, offers only a subset of the actual commands. Because the database is shared by all users of the interface, some commands (FLUSHALL for example) are disabled in it.
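On a local Redis instance, where the command is available, the effect is easy to see in a quick redis-cli session:

127.0.0.1:6379> SET foo bar
OK
127.0.0.1:6379> FLUSHALL
OK
127.0.0.1:6379> GET foo
(nil)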

Can I use an API such as Chef to automatically create, name, and set passwords for multiple servers?

I am new to this so forgive me for not understanding the lingo.
I have been using the Rackspace cloud control panel to build multiple virtual servers. I use them for maybe a couple of hours, then I delete them. I need these servers to all have specific and unique names such as "server1, server2, server3, etc." I also need them to have a specific password, unlike the randomly generated password that is assigned by default.
I have been creating each individual server manually (based on an image that's set up), then I have to go back, reset the passwords, and reboot all of them. Doing each one manually is a bit time consuming, and I'm sure there is an easier way. Please help me figure this out.
I've been doing some searching, but I haven't found anything too relevant to my problem; on top of that, I'm not too familiar with programming and such.
Basically what I'm looking to do is automatically create these servers with their appropriate names and passwords already built in from the start. I'm not sure if some sort of "API" is the answer, or if there's some sort of script that can be written, or both.
Any assistance is much appreciated.
thanks,
Chris

Schema migration in Redis

I have an application using Redis. I used the key name user:<id> to store user info. Then locally I changed my app code to use the key name user:<id>:data for that purpose.
I am scared by the fact that if I git push this new code to my production server, things will break, because my production Redis server already has the keys under the older names.
So the only way I can think of is to stop my app, change all the older key names to the new ones, and then restart it. Do you have a better alternative? Thanks for the help :)
Pushing new code to your production environment is always a scary business (that's why only the toughest survive in this profession ;)). I strongly recommend that before you change your production code and database, you test the workflow and its results locally.
Almost any update to the application requires stopping it - even if only to replace the relevant files. This is even truer for any change that involves a database, for exactly the reason you mentioned.
Even if you can deploy your code changes without stopping the application per se (e.g. a PHP page), you will still want the database change to be done "atomically" - i.e. without any application requests intervening and possibly breaking. While some databases can be taken offline for maintenance, even then you usually stop the app, or else errors will be generated all over the place.
If that is indeed the case, you'll be stopping the app (or putting it into maintenance mode) regardless of the database change, so we take your question to actually mean: what's the fastest way to rename all/some keys in my database?
To answer that question, similarly to the pseudo-code suggested in the other answer below, I suggest you use a Lua script such as the following and EVAL it once you stop the app:
-- rename every key matching the old naming scheme
for _, k in ipairs(redis.call('keys', 'user:*')) do
  -- skip keys that already carry the ':data' suffix
  if k:sub(-5) ~= ':data' then
    redis.call('rename', k, k .. ':data')
  end
end
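One way to run it, assuming the script is saved as rename.lua (redis-cli's --eval flag loads a script from a file and EVALs it):

redis-cli --eval rename.lua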
A few notes about this script that you should keep in mind:
Although the KEYS command is not safe to use in production, since you are doing maintenance it can be used safely here. For all other use cases where you need to scan your keys, Redis' SCAN is much more advisable (see the sketch after these notes).
Since Lua scripts are "atomic", you could in theory run this script without stopping the app - for as long as the script runs (and its runtime depends on the size of your dataset), the app's requests will simply be blocked. Put differently, this approach solves the concern of getting mixed key names (old & new). This, however, is probably not what you'd want to do in any case, because a) your app may still error/timeout during that time, but mainly because b) it would need to be able to handle both types of key names (i.e. running with old keys -> short/long pause -> running with new keys), making your code much more complex.
The if condition is not required if you're going to run the script only once and it succeeds.
Depending on the actual contents of your database, you may want to further filter out keys that should not be renamed.
To ensure compatibility, refrain from hardcoding / computationally generating key names - instead, they should be passed as arguments to the script.
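As mentioned in the first note, a non-blocking alternative is to iterate with SCAN from a client. A sketch with node_redis 3.x (the pattern and suffix match the script above; since SCAN may return a key more than once, errors from renaming an already-renamed key are just logged):

const redis = require('redis');
const client = redis.createClient();

// Walk the keyspace incrementally instead of blocking the server with KEYS.
function scanAndRename(cursor) {
  client.scan(cursor, 'MATCH', 'user:*', 'COUNT', '100', (err, res) => {
    if (err) throw err;
    const next = res[0];   // cursor for the next iteration
    const keys = res[1];   // batch of matching keys
    keys.forEach((k) => {
      if (!k.endsWith(':data')) {
        client.rename(k, k + ':data', (e) => { if (e) console.error(e); });
      }
    });
    if (next !== '0') scanAndRename(next); // '0' means the scan is complete
    else client.quit();
  });
}
scanAndRename('0');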
You can run a migration script in your Redis client's language, using RENAME.
If you don't have any other record of the full set of keys, first issue KEYS user:* to list them all, then rename each key to key:data.
You can issue the renames in a transaction (note that the KEYS call has to run first, outside the transaction, because its reply is needed to build the RENAME commands).
So, a concrete version in node_redis (the KEYS reply is fetched first, then all the RENAMEs are queued into a single MULTI/EXEC transaction):
client.keys('user:*', (err, keys) => {
  if (err) throw err;
  const multi = client.multi();
  keys.forEach((k) => multi.rename(k, k + ':data'));
  multi.exec((e, replies) => { if (e) throw e; console.log(replies); });
});
Got it?