How to get the redis db that is set? [closed] - redis

Hi, is there a way to know the active db on Redis?
Right now I am using $this->redis->select(7), so I basically select it manually. But is there a way to find out which Redis db is currently set?

There is no command that reports which database the current connection has selected; however, you can use CLIENT LIST, which shows the db currently in use by each client.
Example:
127.0.0.1:6379> client list
id=6 addr=127.0.0.1:64502 fd=8 name= age=7 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
You can also use CLIENT SETNAME to name the client when it connects, and then filter the CLIENT LIST output by that name.
127.0.0.1:6379> client setname hello
OK
127.0.0.1:6379> client list
id=6 addr=127.0.0.1:64502 fd=8 name=hello age=189 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
127.0.0.1:6379>
For more details, see the Redis documentation: https://redis.io/commands/client-list
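If you want the same information from application code, here is a minimal sketch in Python using redis-py (the question uses a PHP client, so the library and the connection name "myapp" are assumptions for illustration only): name the connection, then read back its db field from CLIENT LIST.

import redis

# Connect and select db 7, mirroring the $this->redis->select(7) in the question.
r = redis.Redis(host="localhost", port=6379, db=7)

# Name this connection so it is easy to find among all connected clients.
r.client_setname("myapp")

# CLIENT LIST returns one entry per connected client, including its selected db.
for client in r.client_list():
    if client.get("name") == "myapp":
        print("selected db:", client["db"])  # prints: selected db: 7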

Related

Send notifications in telegram group [closed]

How can I have my Telegram bot automatically send messages in a group?
def handle_event(event):
    # print(event)
    global amount0In
    global amount1Out
    global amount1In
    global amount0Out
    amount0In = event['args']['amount0In']
    amount1Out = event['args']['amount1Out']
    amount1In = event['args']['amount1In']
    amount0Out = event['args']['amount0Out']
    if amount0In and amount1Out != 0:
        print(f"Token Sold {amount0In /10**18}, and eth {amount1Out/10**18}")
        buy()
    else:
        print(f"Token Bought {amount0Out /10**18}, and eth {amount1In/10**18}")
        sell()

def buy(update, context):
    buyMessage = f"Buy!!!!\n💴: {amount1In/10**18}\nToken Bought: {amount0Out /10**18} \n"
    update.message.reply_text(buyMessage)

def sell(update, context):
    sellMessage = f"Sell!!!!\n💴: {amount1In/10**18}\nToken Sold: {amount0Out /10**18} \n"
    update.message.reply_text(sellMessage)
When the IF statement is met I want to send a message to a Telegram group, but I can't execute the update message this way, because I keep getting this error:
TypeError: buy() missing 2 required positional arguments: 'update' and 'context'
How can I fix this?
To send a message, all you need is an instance of telegram.Bot. Please have a look at the introduction to the API for more details.
The functions buy and sell look like callback functions for handlers. Since you are apparently not using python-telegram-bot's handler setup to handle the event, there is no point in defining those functions to accept the update and context arguments.
Disclaimer: I'm currently the maintainer of python-telegram-bot.
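Here is a minimal sketch of that suggestion, assuming python-telegram-bot v13 (the synchronous API matching update.message.reply_text in the question); BOT_TOKEN and GROUP_CHAT_ID are placeholders you have to supply yourself:

import telegram

BOT_TOKEN = "123456:ABC-your-token"   # placeholder bot token
GROUP_CHAT_ID = -1001234567890        # placeholder group chat id

bot = telegram.Bot(token=BOT_TOKEN)

def buy():
    # No update/context arguments: the bot pushes the message itself.
    buy_message = f"Buy!!!!\n💴: {amount1In / 10**18}\nToken Bought: {amount0Out / 10**18}\n"
    bot.send_message(chat_id=GROUP_CHAT_ID, text=buy_message)

def sell():
    sell_message = f"Sell!!!!\n💴: {amount1In / 10**18}\nToken Sold: {amount0Out / 10**18}\n"
    bot.send_message(chat_id=GROUP_CHAT_ID, text=sell_message)

With this shape, buy() and sell() can be called from handle_event exactly as in the question's code.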

Gatling: Executor not accepting task when polling

I have a Gatling scenario in which I need to poll a specific endpoint for the duration of the test. However, when polling, the request results in an IllegalStateException with the error "executor not accepting a task".
I've had a look at the docs here, but I'm not sure where I'm going wrong.
The snippet looks like this:
.exec(
    poll()
        .every(5)
        .exec(http("getWingboard")
            .get(WingboardEnpoints.Wingboard)
            .headers(Config.header)
            .check(status().`is`(200))
        ))
Errors look like this:
[gatling-1-2] DEBUG i.g.h.client.impl.DefaultHttpClient - Failed to connect to remoteAddress=xxxx/108.156.28.72:443 from localAddress=null
java.lang.IllegalStateException: executor not accepting a task
at io.netty.resolver.AddressResolverGroup.getResolver(AddressResolverGroup.java:61)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:194)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect(Bootstrap.java:162)
at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:148)
at io.gatling.http.client.impl.DefaultHttpClient.openNewChannelRec(DefaultHttpClient.java:809)
at io.gatling.http.client.impl.DefaultHttpClient.lambda$openNewChannelRec$12(DefaultHttpClient.java:843)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.nio.AbstractNioChannel.doClose(AbstractNioChannel.java:502)
at io.netty.channel.socket.nio.NioSocketChannel.doClose(NioSocketChannel.java:342)
at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:754)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620)
at io.netty.channel.nio.NioEventLoop.closeAll(NioEventLoop.java:772)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:529)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:833)
I'm using the Gatling Gradle plugin v3.7.4 with Kotlin.
Polling is a background task that only lasts as long as the virtual user is performing its main scenario. I suspect your users don't perform anything other than the polling.
Otherwise, please provide a full reproducer.

redis save causing cluster failover

Our DBA team (me) recently took over our Redis infrastructure: a cluster with 8 primary/8 secondary servers, along with 20-30 other standalone and Sentinel instances.
As a paranoid DBA, one thing I wanted to do was set up a scheduled time for backups, using a simple script I wrote that does a save and then moves the resulting rdb file to a new location. Since scheduling this script to run each night at 8pm, we have seen multiple nodes in the cluster fail over to the secondary, which I then have to move back manually. There are also saves going on via the auto-save settings, so I may just abandon doing explicit backups altogether, but I'm curious what's going on. Would doing bgsave instead solve the problem?
October 13th 2021, 20:00:07.176 * DB saved on disk
October 13th 2021, 20:00:11.179 # Client id=374070 addr=10.100.0.151:48416 fd=25 name= age=14 idle=0 flags=P db=0 sub=15 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=6 omem=101109904 events=rw cmd=subscribe scheduled to be closed ASAP for overcoming of output buffer limits.
October 13th 2021, 20:00:16.181 * FAIL message received from 6c4a3e7c46d31d123dd5b55b662d323d51467109 about 866744a3afd0c83cbb91234313ff6ae3c4a9dfec
October 13th 2021, 20:00:18.182 * Clear FAIL state for node 866744a3afd0c83cbb91234313ff6ae3c4a9dfec: replica is reachable again.
October 13th 2021, 20:00:18.182 * FAIL message received from c4f0c574aaaa1309b6ec514a9645a5f6e3238e34 about 06de5562d113d2c36977cbf581b013487d19ee4e
October 13th 2021, 20:00:20.183 * Clear FAIL state for node 06de5562d113d2c36977cbf581b013487d19ee4e: replica is reachable again.
October 13th 2021, 20:00:20.183 * FAIL message received from b9f16fd70418a297796cb0a5ad6cad312aa6d784 about 8dd6dc1ff3105fbcd49e6c541c605f2b9d364952
October 13th 2021, 20:00:20.183 # Cluster state changed: fail
October 13th 2021, 20:00:21.183 # Cluster state changed: ok
It seems like this is likely the issue. I will try updating the client buffer settings and see what happens.
Redis replication and client-output-buffer-limit
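As a rough sketch of the alternative, assuming redis-py and purely illustrative limit values, the backup script could trigger a non-blocking BGSAVE and, if the pubsub clients shown in the log keep overrunning their output buffers, raise that class's limit:

import redis

r = redis.Redis(host="localhost", port=6379)

# BGSAVE forks and writes the RDB file in the background, so the node keeps
# serving clients instead of blocking the way a plain SAVE does.
r.bgsave()

# Optionally loosen the pubsub output-buffer limit the log shows being exceeded.
# Format: <class> <hard-bytes> <soft-bytes> <soft-seconds>; the numbers are examples only.
r.config_set("client-output-buffer-limit", "pubsub 134217728 33554432 60")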

How can I send JSON to consumer using RabbitMQ and Elixir? [closed]

I am trying to send JSON to a consumer with RabbitMQ. Is it possible, and how? I am using Elixir as my programming language.
Follow this link:
https://github.com/pma/amqp
See its open issues for more information about sending JSON.
iex(1)> {:ok, conn} = AMQP.Connection.open
{:ok, %AMQP.Connection{pid: #PID<0.364.0>}}
iex(2)> {:ok, chan} = AMQP.Channel.open(conn)
{:ok,
%AMQP.Channel{conn: %AMQP.Connection{pid: #PID<0.364.0>}, pid: #PID<0.376.0>}}
iex(3)> AMQP.Queue.declare chan, "test_queue"
{:ok, %{consumer_count: 0, message_count: 0, queue: "test_queue"}}
iex(4)> AMQP.Exchange.declare chan, "test_exchange"
:ok
iex(5)> AMQP.Queue.bind chan, "test_queue", "test_exchange"
:ok
iex(6)> AMQP.Basic.publish(chan, "test_exchange", "", Poison.encode!(%{ name: "S" }), [content_type: "application/json"])
:ok

Redis "Client List" purpose and description

While executing CLIENT LIST I get the result below. What is the meaning of each flag?
Slave
addr=100.0.0.0:0000 fd=5 idle=3 flags=S db=0 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=sync
Master
addr=100.0.0.0:0000 fd=6 idle=0 flags=N db=0 sub=0 psub=0 qbuf=0 obl=0 oll=0 events=r cmd=client
With CLIENT LIST, Redis prints one row per connected client.
From the redis.h and networking.c files of the Redis source code:
addr: address/port of the client
fd: file descriptor corresponding to the socket
idle: idle time of the connection in seconds
flags: client flags (see below)
db: current database ID
sub: number of channel subscriptions
psub: number of pattern matching subscriptions
qbuf: query buffer length (0 means no query pending)
obl: output buffer length
oll: output list length (replies are queued in this list when the buffer is full)
events: file descriptor events (see below)
cmd: last command played
The client flags can be a combination of:
O: the client is a slave in MONITOR mode
S: the client is a normal slave server
M: the client is a master
x: the client is in a MULTI/EXEC context
b: the client is waiting in a blocking operation
i: the client is waiting for a VM I/O
d: a watched key has been modified - EXEC will fail
c: connection to be closed after writing entire reply
u: the client is unblocked
N: no specific flag set
The file descriptor events can be:
r: the client socket is readable (event loop)
w: the client socket is writable (event loop)
This is my interpretation, please take it with a grain of salt.
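As a small illustration of the fields above, here is a sketch in Python with redis-py (the library choice is mine, not from the question) that separates replica connections from normal clients using the flags field:

import redis

r = redis.Redis(host="localhost", port=6379)

for client in r.client_list():
    # flags=S marks a replica (slave) connection, flags=N a normal client.
    kind = "replica" if "S" in client["flags"] else "normal"
    print(kind, client["addr"], "last cmd:", client["cmd"])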