Redis Help - Replacing an existing key with a specific id with a new value

I am working with the redis-cli tool, querying my Redis database.
I am storing my keys in Redis in the following fashion:
H:name:id
where name is a string related to the value data and id is the specific id of that value data.
In this case, I am trying to put new value data into an existing key, where the id stays the same but the name in the key changes (new_name):
H:name:id -> H:new_name:id (where -> means replace)
I am having trouble setting the new value to the existing key when I change the name to new_name.
Instead, Redis is creating two different keys:
H:name:id
H:new_name:id
Any suggestions?
Thank you!

What redis commands are you trying? This should work:
rename H:name:id H:new_name:id
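For example, in redis-cli (the key names here are made up): RENAME works for a value of any type, overwrites the destination key if it already exists, and errors only when the source key does not exist.
HSET H:alice:42 field "some value"
RENAME H:alice:42 H:bob:42
HGETALL H:bob:42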

Related

Redis - Sort IDs based on Names in another Set

I am using multiple Redis sets, hashes and sorted sets for my use case.
Suppose I have a hash which stores an ID and its corresponding object (Project ID and its content).
I have sets which contain lists of IDs (lists of ProjectIDs).
I have sorted sets which sort by DateTime fields and other integer scores
(sort by Deadline, Created etc. and also by Project Name).
I also created a sorted set because my use case needs sorting by name (say Project Name). I created a Project Name sorted set (ProjectName:ID as value and 0 as score).
So my requirement is that I need to sort my set (which contains IDs) based on Project Name in DESC or ASC order.
How do I achieve this?
Read the documentation about SORT - it should be something like SORT nameofzset BY nosort GET nameofhash->*. Even better, learn how to write Lua scripts and execute them with EVAL.
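A minimal redis-cli sketch of that pattern, with made-up key names. The sorted set stores "ProjectName:ID" members at score 0, so it already keeps them in lexicographic (name) order; BY nosort tells SORT to preserve that order instead of re-sorting. The GET clause substitutes each member into the pattern, so it only lines up if the hashes are keyed by the same "Name:ID" string - an assumption here, not something given in the question.
ZADD projects:by_name 0 "Alpha:12" 0 "Beta:7"
SORT projects:by_name BY nosort
SORT projects:by_name BY nosort GET project:*->content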

How to implement a key lookup for a generated keys table in Pentaho Kettle

I just started to use Pentaho Kettle for integration. Seems great so far, quite intuitive compared to Talend, which I was also investigating.
I am trying to migrate some customers without their keys, so all I have is their email addresses.
The customer may already exist in the database, so what I need to do is:
If the customer exists, add its id to the imported field and continue.
But if the customer doesn't exist, I need to get the next Hibernate key from the Hibernate_Sequences table and set it as the id.
But I don't want to always allocate a key, so I want to conditionally execute the step that allocates the next key.
So what I want to do is execute the db procedure, which allocates the next key and returns it, only if there's no value in id from the "lookup id" step.
Is this possible?
Just posting my updated flow - so the answer was to use a Filter Rows step, which splits the stream on true/false. I really had trouble getting the id out of the database stored procedure because of a bug, so I had to use a decimal and then convert back to an integer (which I also couldn't figure out how to do directly, so I used a JavaScript step).
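For reference, a rough sketch of that decimal-to-integer conversion inside a Modified Java Script Value step (the incoming field name id_decimal is hypothetical; the new id field is declared as an Integer in the step's output fields grid):
// id_decimal arrives as a number; round it and expose it as a new field
var id = Math.round(id_decimal);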
Yes it is. As per the official documentation (I left only the valuable information), "Lookup values are added as new fields onto the stream". So you just need to put a "Filter rows" step in the flow and check for the "id" field, which is supposed to be added in the "Existing Id Lookup" step.

Redis: How to give a unique id to hashes

I want to save my users' information in hashes, and I want to create a hash for each user in the application.
The hash names are like this: "user:1001"
Now, I want the user ids to start from 1000 and increase by one.
How can I do this?
Thanks
You can have a key called user:id, which will be a number that you will increment to obtain new ids. In your case, you can set the initial value to 1000:
SET user:id 1000
Then by using INCR you will be able to get a new id for your next user:
INCR user:id
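Putting it together (the hash fields below are made up): the INCR above replies with the new value - 1001 the first time - which then becomes part of the hash key. Because INCR is atomic, two clients can never be handed the same id.
HMSET user:1001 name "Alice" email "alice@example.com"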
Depending on the language you use, there may already be tools to solve this problem. I would recommend you check Ohm, or one of its ports.

Redis - handling changes to data structures

I have been experimenting with Redis, and I really like the scalability that it brings to the table. However, I'm wondering how to handle changes to data structures for a system that's already in production.
For example, say I am collecting information about a user, and I use the user_id as a key, dumping the other data about the user as comma-separated values:
user_id: name, email, etc.
Now, say after about 100,000 records, I realise that I need to query by email - how would I take a snapshot of the existing data and create a new index for it?
Using CSV is not a great idea if you want to support changes. You need to use a serializer that handles missing/new values if everything is in one key, or you can use a Redis hash, which gives you named subkeys. Either way you can add/remove fields, with the only requirement being that your code knows what to do if it reads a record without the new value.
To allow lookup by email you need to add an index - basically a key (or list) for each email with the user id as the value. You will need to populate this index by iterating over all keys once, then make sure you update it whenever an email changes.
You could iterate over all keys and store them with a different id, but that is probably more trouble than it is worth.
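A minimal redis-cli sketch of the hash plus the email index (key names, field names and values are made up):
HMSET user:42 name "Colum" email "colum@example.com"
SET email:colum@example.com 42
GET email:colum@example.com
HGETALL user:42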
From my understanding of Redis, this would require something Redis is not designed to do. You would have to loop through all your records (using KEYS *), reorder the data, and write it under a new key. Personally, I would recommend using a list instead of a comma-separated string: a list can be manipulated from inside Redis. A Redis list looks like the following:
"Colum" => [0] c.mcgaley@gmail.com
           [1] password
           [2] Something
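In redis-cli, that list could be built and edited in place like this (the values are the hypothetical ones above):
RPUSH "Colum" "c.mcgaley@gmail.com" "password" "Something"
LRANGE "Colum" 0 -1
LSET "Colum" 2 "SomethingElse"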
I am building an app in which I encountered the same problem. I solved it by having a list with all the user's info, and then having a key named after the user's email whose value points back to the user. So my database would look something like this:
"Colum" => [0] c.mcgaley#gmail.com
[1] password
[2] Something
"c.mcgaley#gmail.com" => "Colum"
So I could query by ID or by email and still get the information I needed.
Sorry that I was not able to directly answer your question. Just hope this helped.

How can we do a map operation from a file and Cassandra at the same time?

I want to run a Hadoop job that maps over input from both a file and Cassandra at the same time.
Is it possible?
I know how to get input files from a directory,
and how to get input data from Cassandra,
but I am not sure whether getting input from both of them at once is possible.
Here are more hints to describe my situation.
The data format is the same.
A file like this:
key value1 value2 value3
...
A Cassandra column structure like this:
key column | column name1  | column name2  | column name3
key value  | column value1 | column value2 | column value3
...
I need to extract a line from each and then compare the data based on each key.
Yes, I can get duplicate keys, new keys or deleted keys.
Thanks.
You can do this in two jobs. First, make a map-only job to pull your Cassandra data into HDFS.
Then use the "MultipleInputs" class's "addInputPath" to specify the two locations you want your data from: http://hadoop.apache.org/common/docs/r0.20.1/api/org/apache/hadoop/mapred/lib/MultipleInputs.html
Then in the map of your second job you can branch on where the input came from, based on the data you are seeing (for example, have the first job prefix each Cassandra record with "cassandra" and recognize that prefix in the map class of the second job), and clean it up (make it uniform) before it goes to the reducer.
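A hedged Java sketch of that second job, using the old mapred API from the linked r0.20.1 docs (class names, field layout and the "file"/"cass" tags are all made up for illustration). Instead of sniffing the data inside one map, it uses the other feature of MultipleInputs - a separate mapper class per input path - so each mapper tags its records explicitly:

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.lib.MultipleInputs;

public class CompareJob {

    // Emits (key, "file\t<rest of line>") for each line of the plain file.
    public static class FileMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {
        public void map(LongWritable offset, Text line,
                OutputCollector<Text, Text> out, Reporter reporter)
                throws IOException {
            String[] parts = line.toString().split("\\s+", 2);
            String rest = parts.length > 1 ? parts[1] : "";
            out.collect(new Text(parts[0]), new Text("file\t" + rest));
        }
    }

    // Emits (key, "cass\t<rest of line>") for each line of the HDFS dump
    // that the first, map-only job wrote from Cassandra.
    public static class CassandraDumpMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {
        public void map(LongWritable offset, Text line,
                OutputCollector<Text, Text> out, Reporter reporter)
                throws IOException {
            String[] parts = line.toString().split("\\s+", 2);
            String rest = parts.length > 1 ? parts[1] : "";
            out.collect(new Text(parts[0]), new Text("cass\t" + rest));
        }
    }

    // Sees both tagged records per key: a key present on only one side
    // is new or deleted; differing values mean the record changed.
    public static class CompareReducer extends MapReduceBase
            implements Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterator<Text> values,
                OutputCollector<Text, Text> out, Reporter reporter)
                throws IOException {
            String fileVal = null, cassVal = null;
            while (values.hasNext()) {
                String v = values.next().toString();
                if (v.startsWith("file\t")) fileVal = v.substring(5);
                else cassVal = v.substring(5);
            }
            if (fileVal == null) out.collect(key, new Text("deleted"));
            else if (cassVal == null) out.collect(key, new Text("new"));
            else if (!fileVal.equals(cassVal)) out.collect(key, new Text("changed"));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(CompareJob.class);
        conf.setJobName("compare-file-vs-cassandra");
        // One mapper per input location, both feeding the same reducer.
        MultipleInputs.addInputPath(conf, new Path(args[0]),
                TextInputFormat.class, FileMapper.class);
        MultipleInputs.addInputPath(conf, new Path(args[1]),
                TextInputFormat.class, CassandraDumpMapper.class);
        conf.setReducerClass(CompareReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(conf, new Path(args[2]));
        JobClient.runJob(conf);
    }
}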