Is there any way to know if a Redis database is in use? - redis

Is there any way to know, using node-redis, if a Redis DB is being used by another process? Something like this:
Process A connects to db0.
Process B checks whether db0 is in use.
Process B connects to db1 because db0 is in use.
Thanks for the help!

The CLIENT LIST command should give you this info.
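For example, each line of the CLIENT LIST reply contains a db=<n> field, so a process can scan the reply for occupied DBs before connecting. The question uses node-redis, but here is a minimal Python sketch of just the parsing step (the helper names and the abridged reply are invented for illustration); the same logic ports directly to JavaScript:

```python
def used_dbs(client_list_reply):
    """Collect the DB indexes selected by connected clients from the raw
    text of a CLIENT LIST reply (one client per line, space-separated
    key=value fields, one of which is db=<n>)."""
    dbs = set()
    for line in client_list_reply.splitlines():
        for field in line.split():
            if field.startswith("db="):
                dbs.add(int(field[3:]))
    return dbs

def first_free_db(client_list_reply, db_count=16):
    """Return the lowest DB index (0..db_count-1) that no client has
    selected, or None if all are taken. 16 is Redis's default DB count."""
    used = used_dbs(client_list_reply)
    return next((i for i in range(db_count) if i not in used), None)

# Abridged example reply: two clients, both on db0
reply = ("id=3 addr=127.0.0.1:50188 name= db=0 cmd=client\n"
         "id=4 addr=127.0.0.1:50189 name= db=0 cmd=get\n")
print(used_dbs(reply))       # {0}
print(first_free_db(reply))  # 1
```

Note that check-then-connect is racy: two processes can both see db0 as free at the same moment. If that matters, guard the choice with a SET-with-NX style lock key instead.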

Related

Can Redis support queries like SQL JOIN and GROUP BY when replacing a SQL DB with Redis?

I have a project in which I need to replace the SQL DB with Redis. It's a job scheduling system. There are tables like JobInfo, TaskInfo, Result, BatchInfo, etc.
What is the best way to map DB tables to Redis key-value pairs?
The project uses JOIN and GROUP BY style queries.
What is the best way to replace SQL Server with Redis? Also, does Redis provide a way to query the data the way I can with JOIN and GROUP BY queries?
Redis is basically a key-value store (a bit more sophisticated than a simple one, but still a key-value DB). The value may be a document that follows some schema, but Redis isn't optimized to search those documents and query them like dedicated document databases or a relational database such as SQL Server.
I don't know why you're trying to migrate from SQL Server to Redis, but you need to re-check whether that's the right design choice. If you need a fixed schema and join operations, that may suggest Redis isn't the right solution.
If all you're looking for is caching, you can cache in the application layer, or use another solution to integrate Redis and SQL Server (I wrote a simple open-source project that does that: http://redisql.ishahar.net ).
Hope this helps.
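To make the key-value mapping concrete, a common pattern is one hash per row plus hand-maintained sets as secondary indexes. The sketch below (the table and field names are invented, and the command tuples are just for illustration rather than an actual client call) shows how a JobInfo row might be laid out:

```python
def row_to_commands(job):
    """Translate one hypothetical JobInfo row into the Redis commands
    that would store it:
    - HSET job:<id> ...          one hash per row
    - SADD job:status:<s> <id>   a hand-maintained secondary index, the
      closest Redis equivalent of an indexed column you could GROUP BY.
    Returned as (command, key, payload) tuples for illustration."""
    row_key = f"job:{job['id']}"
    index_key = f"job:status:{job['status']}"
    return [
        ("HSET", row_key, job),
        ("SADD", index_key, job["id"]),
    ]

# Example: a made-up JobInfo row
job = {"id": 42, "status": "queued", "batch": 7}
for cmd in row_to_commands(job):
    print(cmd)
```

A GROUP BY on status then becomes one SMEMBERS (or SCARD) per job:status:* set, and a "join" to BatchInfo means fetching the related hashes by hand in application code; there is no server-side equivalent, which is exactly the limitation described above.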
I guess it's not possible, though you can see the post below on implementing a JOIN-like feature in Redis:
Can we take join in Redis?
Please refer to this post as well:
Redis database table desing like sql?

Rename or copy a whole Redis database to another one?

I have an application that gets all its data from a Redis database (DB1), which is updated every hour by an external process. During this update, all the data in Redis is replaced.
To avoid errors in the main application while updating, I thought about having the updater process write to a secondary Redis database (DB2) and, after finishing, switch this database with the one the application is using.
I didn't find a way to rename or copy a whole Redis database, so the only way I can think of is to erase all keys from DB1 and then use MOVE to save all the new keys from DB2 in DB1.
Is there a better way to accomplish this?
Why not simply have DB2 SLAVEOF DB1, poll it with INFO, and check for master_sync_in_progress:0?
When you're about to perform your updates to DB1, issue SLAVEOF NO ONE on DB2 (breaking the replication). Perform your updates on DB1 while clients access the static (old) data on DB2; then re-slave when the updates on DB1 are complete.
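The polling step reduces to checking one field of the INFO reply. Below is a minimal Python sketch that parses the raw reply text (the helper name and the abridged output are invented for illustration; most client libraries already hand you INFO as a parsed dict, in which case you would just read the key directly):

```python
def sync_in_progress(info_reply):
    """Return True while the replica is still syncing from its master,
    based on the raw text of an INFO replication reply."""
    for line in info_reply.splitlines():
        if line.startswith("master_sync_in_progress:"):
            return line.split(":", 1)[1].strip() == "1"
    # Field absent: the instance is not a replica, so no sync is running.
    return False

# Abridged INFO replication output from a replica mid-sync
info = ("role:slave\n"
        "master_host:10.0.0.1\n"
        "master_sync_in_progress:1\n")
print(sync_in_progress(info))  # True
```

With a real client you would loop, issuing INFO every second or so until this returns False, and only then point readers at the freshly synced instance.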

Running same job on multiple instances in SQL Server

How can I run the same job on multiple instances without including it in all the instances, i.e. by including it in only one instance?
Is there any way to do that?
Yes, there is a way to do this. First you have to create a master server (http://msdn.microsoft.com/en-us/library/ms175104.aspx) and then create a master job (http://msdn.microsoft.com/en-us/library/ms190662.aspx) that will be downloaded to the defined target servers.
Regards,
Dean Savović

How to restrict MySQL users from using mysqldump

Hi, I have a requirement to create a database from which no data goes outside, neither in CSV format nor as a dump file.
If MySQL crashes, the data should be gone; no recovery should exist.
It may look like a stupid idea to implement, but that is the client's requirement.
So can anyone help me restrict the mysqldump client program and INTO OUTFILE commands for all users except root? The other users will have SELECT, INSERT, UPDATE, DELETE, and other database-level privileges, but no global-level privileges.
Can anyone help me with this?
I'm not sure what you are looking for, but if you have SSH access to the server, I propose using a filesystem backup or a tool like innobackupex instead of mysqldump.
For big data, mysqldump isn't a good solution.
You must restrict every new MySQL user's privileges (Select_priv, Lock_tables_priv, File, Alter, Create tmp table, Execute, Create table) so the user can't do any of those things; even with mysqldump they can't export. Use the mysql.user table, or a tool like Navicat.
You can't. From the perspective of the MySQL server, mysqldump is just another client that runs a lot of SELECT statements. What it happens to do with them (generate a dump file) is not something that the server can control.
For what it's worth, this sounds like an incredibly stupid idea, as it means there will be no way to restore your client's data from backups if a disaster occurs (e.g., a hard drive fails, MySQL crashes and corrupts tables, someone accidentally deletes a bunch of data, etc.). The smartest thing to do here is to tell the client that you can't do it; anything else is setting yourself up for failure.

using rabbitMQ to transfer MySQL records

I want to use RabbitMQ to store new transactions (records) happening in the local database and forward them to a similar remote database.
Any suggestions or ideas on this?
Can you share the reason for wanting to use RabbitMQ as the store-and-forward mechanism to the remote database? Most databases already have mechanisms to replicate or back up to a remote database installation; these are tuned to the database and tend to work very reliably. RabbitMQ will not help you shard database operations across multiple remote database instances either.
Why RabbitMQ?