How to use Redis as a back end for Mosquitto ACL (using the JPmens auth plugin)?

I am working with Mosquitto and plan to use Redis as the back end to handle both username/password authentication and ACLs. I am using JPmens' authentication plugin for this. The authentication works well, but I can't make the ACL work. Redis uses unique keys, and the usernames (the keys in my case) are already used for the username/password pairs. I have tried mixing the username, password, and topics together in sets/lists, but none of that works.
The mosquitto.conf:
auth_plugin /etc/mosquitto/auth-plug.so
auth_opt_backends redis
auth_opt_redis_host 127.0.0.1
auth_opt_redis_port 6379
auth_opt_redis_userquery GET %s
auth_opt_redis_aclquery GET %s-%s
The following name/password pairs work fine for authentication:
SET user1 PBKDF2$sha256$901$Qh18ysY4wstXoHhk$g8d2aDzbz3rYztvJiO3dsV698jzECxSg
SET user2 PBKDF2$sha256$901$R74X2ae3MufMS20M$CAbXZFDmXJN7Cc28Dm/Z97OfM8Tz1JHn
...
The following settings won't work for the ACL (a/b etc. as topics):
sadd user22 PBKDF2$sha256$901$Qh18ysY4wstXoHhk$g8d2aDzbz3rYztvJiO3dsV698jzECxSg a/b c/d
rpush user33 PBKDF2$sha256$901$q5/N74O6Iaf/e8Cg$dEA3tZSi/sJeXKAkX39Gd3agy2WY96gE e/f
What's the correct way to do so?
Stepping through the plugin's ACL query in gdb shows:
Single stepping until exit from function be_redis_aclcheck, which has no line number information.
redisCommand (c=0x6537d0, format=0x6561c0 "GET user1-t/c") at hiredis.c:1345
1345 void *redisCommand(redisContext *c, const char *format, ...) {
(gdb) bt
0 redisCommand (c=0x6537d0, format=0x6561c0 "GET my-t/c") at hiredis.c:1345
1 0x00007ffff5e61376 in be_redis_aclcheck () from /etc/mosquitto/auth-plug.so
2 0x00007ffff5e5c351 in mosquitto_auth_acl_check ()
from /etc/mosquitto/auth-plug.so
Here, user1 is the user name and t/c is the topic. GET user1-t/c seems to tell me that a string type is expected in the Redis database. Can anyone give me an example of how to get this to work?
Thanks

I have figured out how it works. If the MQTT broker should only allow client user1 to publish and subscribe to the "a/b" and "c/d" topics, the correct ACL data in Redis for the JPmens plugin is:
user1-a/b 2
user1-c/d 2
"user1-a/c" is the key and 2 is the value.

This is not the preferred setup: if Redis goes down for any reason, your entire system goes down with it.
It becomes a SPOF (single point of failure) in your architecture.

Related

Geth with clique block sealing without unlock account

Hello, I have a local blockchain: a Geth client, 2 nodes, and the clique proof-of-authority algorithm.
I start geth with this command:
geth --datadir node2/ --syncmode 'full' --port 30312
--rpc --rpcport 8546 --rpccorsdomain "*"
--ipcpath geth.ipc --rpcapi 'personal,db,eth,net,web3,txpool,miner'
--bootnodes 'enode://702efed8e606...ad041b4371a91989@127.0.0.1:30310'
--networkid 2456 --gasprice '1' --mine
--unlock '0x46004DEAfddb60d11cA04501df8C52aE4679Be8f' --password password.txt
but because of the unlock, anyone can now transfer ether from this account to some other account, like so:
const Web3 = require("web3");
const web3Client = new Web3(new Web3.providers.HttpProvider("http://localhost:8546"));
(async () => {
  await web3Client.eth.sendTransaction({
    from: "0x46004DEAfddb60d11cA04501df8C52aE4679Be8f",
    to: "0xE77e5634A46153e1cfCa02350cf212BdbC18fbC6",
    value: 23
  });
})();
but if I remove --unlock from the geth command, I can no longer seal blocks:
WARN [06-01|14:44:52] Block sealing failed err="authentication needed: password or unlock"
Is it possible to seal blocks in some other way so I won't have to unlock the account anymore?
Unfortunately, geth needs access to the private key to sign, so you have to have the account unlocked; otherwise it can't sign.
What you can do is keep this node sealing and get rid of
--ipcpath geth.ipc --rpcapi 'personal,db,eth,net,web3,txpool,miner'
Instead, expose the RPC on another node without an unlocked account.
Use that other node for all your interactions, and let the first one sign.
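For example, a sketch based on the flags above (the second node's data directory and port are illustrative, and the --bootnodes flag is omitted for brevity but belongs on both nodes):
# Sealing node: account unlocked, mining, no RPC exposed
geth --datadir node2/ --syncmode 'full' --port 30312 \
  --networkid 2456 --gasprice '1' --mine \
  --unlock '0x46004DEAfddb60d11cA04501df8C52aE4679Be8f' --password password.txt

# RPC node: handles all client interactions, unlocks nothing
geth --datadir node3/ --syncmode 'full' --port 30313 \
  --rpc --rpcport 8546 --rpccorsdomain "*" \
  --rpcapi 'db,eth,net,web3,txpool' --networkid 2456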
Cheers;
Evan

Wireshark filter for packets which initiates FIN (connection close) sequence from the server-side

Apache (ec2)               Client (ELB)
     |                          |
     |--------[1.]FIN--------->|
     |                          |
     |<------[2.]FIN+ACK-------|
     |                          |
     |----------ACK----------->|
     |                          |
With Wireshark I'd like to extract only the "[1.]FIN" packet in the figure above, which is emitted by the server's port 80 and which initiates the FIN sequence.
I've tried a filter:
tcp.flags.fin && tcp.srcport==80
but the filter also extracts the extra "[2.]FIN+ACK" packets.
How can I filter out only the [1.] packet, i.e. the initiator of the FIN sequence?
Background:
I'm struggling to get rid of 504 errors with AWS ELB and EC2 (Apache), where the "FIN - FIN/ACK - ACK" sequence is initiated by the backend Apache side. In this environment, a FIN sequence initiated by the ELB is the ideal, as the official AWS documentation says: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html
Following https://aws.amazon.com/jp/premiumsupport/knowledge-center/504-error-classic/, I've tried replacing the MPM (event -> worker) and disabling TCP_DEFER_ACCEPT, which slightly reduced the 504 errors. However, the situation is not much improved.
The point, I think, is to find what makes Apache initiate the active-close sequence, so I'm first trying to extract the initiating FIN packets from Apache among the up-to-512 parallel connections between the ELB and EC2 (Apache).
tcp.flags.fin == 1 && tcp.flags.ack == 0
A filter such as tcp.flags.fin only checks for the presence of the field. To match a particular value, a comparison is needed; that is also why a filter like "tcp" works to find TCP packets.
Matching on FIN alone does not exclude other flags being set or unset, so a comparison is needed for each flag that should be part of the filter.
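Combined with the source-port condition from the question, the display filter becomes (a sketch; adjust the port to your environment):
tcp.flags.fin == 1 && tcp.flags.ack == 0 && tcp.srcport == 80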

julia on PBS cluster: what to give to addprocs()?

I'm trying to set up a cluster across machines on a PBS-managed cluster. I'm perfectly able to compute within one node by running julia -p 12 (after having reserved one node with 12 CPUs).
I understand that to use several machines, I have to add them to the master process with addprocs. I was able to do that on a different cluster (SGE); on this one, something is going wrong.
You can see everything I'm doing, including submit scripts etc, on this branch of a github repo.
To get a list of machines, I parse the PBS_NODEFILE, which, for a submit script with the option
#PBS -l nodes=2:ppn=12 # give me 2 nodes with 12 processors each
looks something like this:
red0004
red0004
...
red0004
red0347
...
red0347
I parse this file with bind_pe_procs() in sge.jl in the repo and give a vector of machine names to addprocs. When I submit this I get an SSH error; I put up a gist with the resulting output. I don't know what it means.
Does this have to do with a system setting, i.e. do I have to talk to the sysadmin about SSH between machines? What are the right questions to ask?
I am also unsure about what exactly I have to give to addprocs(). I don't want to add the master process (I don't want worker 1 SSHing into itself?), so I exclude ENV["HOST"] = node001 from my list. But what about all the processors with the same name, node002? Do I list all of those,
machines = [ "red0347" for i=1:12]
or just once
machines = ["red0347"]
in addprocs(machines)
thanks!

RabbitMQ: separate consumer-producer bundles while using just one queue server

We're going to use RabbitMQ in our project, but we're facing a problem: we want to debug on our dev machines, so the response message has to be sent back to the machine which originally sent the request message out. How can we achieve that? Is there an existing solution in the spring-rabbitmq framework?
We have considered several solutions, such as declaring a set of queues for each machine, with the queue name prefixed by the machine name. Is that feasible?
Define a set of queues (debug queue A-Z) and bind them to a headers exchange with the attributes x-match=any, from=[A-Z], to=[A-Z] respectively. Then bind the headers exchange to your main working exchange (one or more) to receive all the messages you are interested in, so when your consumer publishes a response it will be duplicated to your debug exchange and then routed to the appropriate queue.
[sender X]            [ worker ]                    [consumer on queue X]
     |                     ^                                  ^
 [request]     [response from=X, to=X]         [duplicated request from=X]
      \                    |                   [duplicated response from=X, to=X]
       \           [request from=X]                           ^
        v                  |                                  |
    [working topic exchange] -----------------> [debug headers exchange]
       /     |     \                                /     |     \
 {bindings by routing key mask}         {bindings by any headers from=[A-Z], to=[A-Z]}
       /     |     \                                /     |     \
 [working queue 1] ... [working queue N]    [debug queue A] ... [debug queue Z]
To correlate request and response messages you can use the applicationId and correlationId message attributes.
Note that both request and response messages will be duplicated to the debug queues. You may also use separate queues for request and response messages by binding queues to match only specific headers, something like x-match=all, from=[A-Z] or x-match=all, to=[A-Z], and publishing response and request messages with only that header (only from or only to), but that is up to you.
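A minimal sketch of these bindings with the pika Python client (an assumption; exchange, queue, and header names are illustrative, and the same bindings can be declared from any client or the management UI):
import pika  # assumed AMQP client library

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Main working exchange plus a headers exchange for the debug copies
ch.exchange_declare(exchange="work", exchange_type="topic")
ch.exchange_declare(exchange="debug", exchange_type="headers")

# Duplicate everything published to "work" into "debug"
ch.exchange_bind(destination="debug", source="work", routing_key="#")

# Per-machine debug queue: matches messages sent from OR to machine "A"
ch.queue_declare(queue="debug-A")
ch.queue_bind(queue="debug-A", exchange="debug",
              arguments={"x-match": "any", "from": "A", "to": "A"})

# Publish with identifying headers so copies route to the right debug queue
ch.basic_publish(exchange="work", routing_key="task.do", body=b"payload",
                 properties=pika.BasicProperties(headers={"from": "A", "to": "A"}))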
The pros:
easy to implement
requires minimal code changes
easy to turn on/off
may be safely run in a production environment
Cons:
uses more resources on the RabbitMQ side
Alternatively, you can use the RPC pattern if your debugging process requires receiving responses in real time. But this will block the publisher until the response is processed, which may differ from real-world app usage and break the business logic.
Pros:
step-by-step debugging process
Cons:
hard to implement
may require a lot of code changes
breaks business logic
hard to enable/disable
not safe for a production environment
P.S.: sorry for the ASCII graph.

How do I delete everything in Redis?

I want to delete all keys. I want everything wiped out, leaving me a blank database.
Is there a way to do this with the Redis client?
With redis-cli:
FLUSHDB – Deletes all keys from the connection's current database.
FLUSHALL – Deletes all keys from all databases.
For example, in your shell:
redis-cli flushall
Heads up that FLUSHALL may be overkill. FLUSHDB is the one to flush a single database only. FLUSHALL will wipe out the entire server, as in every database on the server. Since the question was about flushing a database, I think this is an important enough distinction to merit a separate answer.
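For example, to flush only database 1 rather than the whole server (the -n option selects the database index):
redis-cli -n 1 flushdb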
Answers so far are absolutely correct; they delete all keys.
However, if you also want to delete all Lua scripts from the Redis instance, you should follow it by:
SCRIPT FLUSH
The OP asks two questions; this completes the second question (everything wiped).
FLUSHALL
Remove all keys from all databases
FLUSHDB
Remove all keys from the current database
SCRIPT FLUSH
Remove all the scripts from the script cache.
If you're using the redis-rb gem then you can simply call:
your_redis_client.flushdb
You can use flushall in your terminal:
redis-cli> flushall
This method worked for me: it deletes everything in the currently connected database on your Jedis cluster.
public static void resetRedis() {
    jedisCluster = RedisManager.getJedis(); // your JedisCluster instance
    for (JedisPool pool : jedisCluster.getClusterNodes().values()) {
        try (Jedis jedis = pool.getResource()) {
            jedis.flushAll();
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }
}
One more option from my side:
In our production and pre-production databases there are thousands of keys. From time to time we need to delete some keys (by some mask), modify them by some criteria, etc. Of course, there is no way to do this manually from the CLI, especially with sharding (512 logical DBs on each physical node).
For this purpose I wrote a small Java client tool that does all this work. For key deletion the utility can be very simple, just one class:
public class DataCleaner {

    public static void main(String args[]) {
        String keyPattern = args[0];
        String host = args[1];
        int port = Integer.valueOf(args[2]);
        int dbIndex = Integer.valueOf(args[3]);
        Jedis jedis = new Jedis(host, port);
        int deletedKeysNumber = 0;
        if (dbIndex >= 0) {
            deletedKeysNumber += deleteDataFromDB(jedis, keyPattern, dbIndex);
        } else {
            int dbSize = Integer.valueOf(jedis.configGet("databases").get(1));
            for (int i = 0; i < dbSize; i++) {
                deletedKeysNumber += deleteDataFromDB(jedis, keyPattern, i);
            }
        }
        if (deletedKeysNumber == 0) {
            System.out.println("No keys with key pattern: " + keyPattern + " were found in database with host: " + host);
        }
    }

    private static int deleteDataFromDB(Jedis jedis, String keyPattern, int dbIndex) {
        jedis.select(dbIndex);
        Set<String> keys = jedis.keys(keyPattern);
        for (String key : keys) {
            jedis.del(key);
            System.out.println("The key: " + key + " has been deleted from database index: " + dbIndex);
        }
        return keys.size();
    }
}
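A hypothetical invocation (the argument order follows main() above; a negative db index sweeps every logical database):
java DataCleaner "temp_*" localhost 6379 -1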
I find writing this kind of tool very easy; it takes no more than 5-10 minutes.
Use FLUSHALL ASYNC if you are on Redis 4.0.0 or greater, otherwise FLUSHALL.
https://redis.io/commands/flushall
Note: everything that existed before FLUSHALL ASYNC was executed will be evicted; changes made while FLUSHALL ASYNC is executing remain unaffected.
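For example, from redis-cli:
127.0.0.1:6379> FLUSHALL ASYNC
OK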
FLUSHALL deletes all the keys of all existing databases.
For Redis version > 4.0, FLUSHALL ASYNC is supported, which runs in a background thread without blocking the server:
https://redis.io/commands/flushall
FLUSHDB deletes all the keys in the selected database:
https://redis.io/commands/flushdb
The time complexity of these operations is O(N), with N being the number of keys in the database.
The response from Redis is the simple string "OK".
Open redis-cli and type:
FLUSHALL
You can use FLUSHALL, which will delete all keys from every database, whereas FLUSHDB will delete all keys from the current database.
Stop Redis instance.
Delete RDB file.
Start Redis instance.
redis-cli -h <host> -p <port> flushall
This removes all data from the server the client is connected to (specified by host and port).
After you start the Redis server (using service redis-server start --port 8000 or just redis-server), use redis-cli -p 8000 to connect to it as a client in a different terminal.
You can use either
FLUSHDB - Delete all the keys of the currently selected DB. This command never fails. The time-complexity for this operation is O(N), N being the number of keys in the database.
FLUSHALL - Delete all the keys of all the existing databases, not just the currently selected one. This command never fails. The time-complexity for this operation is O(N), N being the number of keys in all existing databases.
Check the documentation for the ASYNC option of both.
If you are using Redis through its Python interface, these two functions (from the redis-py source) provide the same functionality:
def flushall(self):
    "Delete all keys in all databases on the current host"
    return self.execute_command('FLUSHALL')
and
def flushdb(self):
    "Delete all keys in the current database"
    return self.execute_command('FLUSHDB')
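A minimal usage sketch of the client-side calls (assuming a server on localhost:6379):
import redis

r = redis.Redis(host='localhost', port=6379)
r.flushdb()   # delete all keys in the current database
r.flushall()  # delete all keys in every database on the server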
You can also sometimes stop the redis-server and delete the RDB and AOF files, making sure no data can be reloaded.
Then start the redis-server again; it is now new and empty.
You can use FLUSHDB, e.g.:
List databases:
127.0.0.1:6379> info keyspace
# Keyspace
List keys
127.0.0.1:6379> keys *
(empty list or set)
Add one value to a key
127.0.0.1:6379> lpush key1 1
(integer) 1
127.0.0.1:6379> keys *
1) "key1"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
Create another key with two values
127.0.0.1:6379> lpush key2 1
(integer) 1
127.0.0.1:6379> lpush key2 2
(integer) 2
127.0.0.1:6379> keys *
1) "key1"
2) "key2"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=2,expires=0,avg_ttl=0
List all values in key2
127.0.0.1:6379> lrange key2 0 -1
1) "2"
2) "1"
Do FLUSHDB
127.0.0.1:6379> flushdb
OK
List keys and databases
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> info keyspace
# Keyspace
Your question seems to be about deleting all the keys in a database. In this case you should:
Connect to Redis. You can use the command redis-cli (if running on port 6379); otherwise you will have to specify the port number as well.
Select your database (command select {Index}).
Execute the command flushdb.
If you want to flush keys in all databases, then you should try flushall.
You can use the following approach in Python:
def redis_clear_cache(self):
    try:
        redis_keys = self.redis_client.keys('*')
    except Exception as e:
        # print('redis_client.keys() raised exception => ' + str(e))
        return 1
    try:
        if len(redis_keys) != 0:
            self.redis_client.delete(*redis_keys)
    except Exception as e:
        # print('redis_client.delete() raised exception => ' + str(e))
        return 1
    # print("cleared cache")
    return 0
This works for me: redis-cli KEYS \* | xargs --max-procs=16 -L 100 redis-cli DEL
It lists all keys in Redis, then passes them via xargs to redis-cli DEL, using at most 100 keys per command but running 16 commands at a time. It is very fast and useful when FLUSHDB or FLUSHALL are unavailable for security reasons, for example when using Redis from Bitnami in Docker or Kubernetes. Also, it doesn't require any additional programming language, and it's just one line.
If you want to clear Redis on Windows:
find redis-cli in
C:\Program Files\Redis
and run the FLUSHALL command.
It's better if you can have RDM (Redis Desktop Manager).
You can connect to your Redis server by creating a new connection in RDM.
Once it's connected, you can check the live data, and you can also play around with any Redis command.
To open a CLI in RDM:
1) Right click on the connection; you will see a console option. Just click on it and a new console window will open at the bottom of RDM.
Coming back to your question: FLUSHALL is the command. You can simply type FLUSHALL in the Redis CLI.
Moreover, if you want to know about any Redis command and its proper usage, go to the link below:
https://redis.io/commands
There are different approaches. If you want to do this remotely, issue flushall to that instance through the command line tool redis-cli or some other tool, i.e. telnet or a programming language SDK. Or just log in to that server, kill the process, and delete its dump.rdb and appendonly.aof files (back them up before deletion).
If you are using Java, then from the documentation you can use any one of these, based on your use case.
/**
 * Remove all keys from all databases.
 *
 * @return String simple-string-reply
 */
String flushall();

/**
 * Remove all keys asynchronously from all databases.
 *
 * @return String simple-string-reply
 */
String flushallAsync();

/**
 * Remove all keys from the current database.
 *
 * @return String simple-string-reply
 */
String flushdb();

/**
 * Remove all keys asynchronously from the current database.
 *
 * @return String simple-string-reply
 */
String flushdbAsync();
Code:
RedisAdvancedClusterCommands syncCommands = // get sync() or async() commands
syncCommands.flushdb();
Read more: https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster
For anyone wondering how to do this in C#: it's the same as the answer provided for Python for this same question.
I am using StackExchange.Redis v2.2.88 for a .NET (Core) 5 project. I only need to clear my keys for integration testing, and I have no reason to do this in production.
I checked what is available in IntelliSense and I don't see a stock way to do this with the existing API. I imagine this is intentional and by design. Luckily the API does expose an Execute method.
I tested this by doing the following:
Opened up a command window. I am using docker, so I did it through docker.
Type in redis-cli which starts the CLI
Type in KEYS * and it shows all of my keys, so I can verify they exist before and after executing the following code:
// Don't abuse this, use with caution
var cache = ConnectionMultiplexer.Connect(
    new ConfigurationOptions
    {
        EndPoints = { "localhost:6379" }
    });
var db = cache.GetDatabase();
db.Execute("flushdb");
Type in KEYS * again and verify that it's empty.
Hope this helps anyone looking for it.