Redis pipe mode throws "client reached max query buffer length"?

I'm trying to import about 3 GB of Neo4j Cypher script data into Redis, using Redis pipe mode.
Docker image:
redislabs/redisgraph:2.8.17
Redis version:
Redis server v=6.2.6 sha=00000000:0 malloc=jemalloc-5.1.0 bits=64 build=e15697f1a083b6bc
RedisGraph module doc: redis-graph-doc
Following the document below, I use a Python script to generate data that conforms to the Redis protocol:
redis-bulk-loading-doc
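The encoding itself is straightforward; a minimal Python sketch of the generator (the helper name and the sample query are illustrative, not my exact script):

import sys

def gen_redis_proto(*args):
    # RESP encoding: *<arg count>\r\n, then for each argument
    # $<byte length>\r\n<argument>\r\n
    proto = "*%d\r\n" % len(args)
    for arg in args:
        arg = str(arg)
        # The declared length must be the length in BYTES, not characters
        proto += "$%d\r\n%s\r\n" % (len(arg.encode('utf-8')), arg)
    return proto

query = 'MERGE (n:Entity) SET n.name="xxx"'  # illustrative query
sys.stdout.write(gen_redis_proto("GRAPH.QUERY", "knowledge", query))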
Data example:
cat -A redis_command.txt | head
*3^M$
$11^M$
GRAPH.QUERY^M$
$9^M$
knowledge^M$
$280^M$
MERGE (n:xxxx) SET n.name="xxx", n.nebula_id="xxx", n.concept_id="xxx", n.create_time="xxx", n.data_source="xxx", n.entity_tag="xxx", n.version="xxx"^M$
^M$
*3^M$
$11^M$
GRAPH.QUERY^M$
$9^M$
knowledge^M$
$270^M$
MERGE (n:xxxx) SET n.name="xxx", n.nebula_id="xxx", n.concept_id="xxx", n.create_time="xxx", n.data_source="xxx", n.entity_tag="xxx", n.version="xxx"^M$
After that, I use the following command to import the data:
cat redis_command.txt | redis-cli --pipe
But the command runs for about a minute and then exits on its own.
The only information printed in the Redis log file is:
Closing client that reached max query buffer length: id=10084 addr=127.0.0.1:34990 laddr=127.0.0.1:6379 fd=11 name= age=47 idle=0 flags=b db=0 sub=0 psub=0 multi=-1 qbuf=1073756036 qbuf-free=268421234 argv-mem=293 obl=0 oll=0 omem=0 tot-mem=1342198077 events=r cmd=graph.QUERY user=default redir=-1 (qbuf initial bytes: "\r\n*3\r\n$11\r\nGRAPH.QUERY\r\n$9\r\nknowledge\r\n$274\r\nMERGE (n:`xxxxxxxxx")
I don't know where the problem is. How should I solve this?

Related

Sending command to GPS device using gpsd python library

I use the gpsd Python library to read and parse the NMEA strings received from the GPS device. I would like to send some commands to the GPS in order to fine-tune the measurement rate, report rate, and so on.
Is this possible using the gpsd library, or must I send the commands some other way?
According to the gpsd manual:
To send a binary control string to a specified device, write to the
control socket a '&', followed by the device name, followed by '=',
followed by the control string in paired hex digits.
So if you have the gpsd service running as gpsd -F /var/run/gpsd.sock, you can use the following code to send commands to the GPS device:
import socket
import sys
# Create a Unix-domain socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# Connect to the Unix control socket that gpsd is listening on
gpsd_address = '/var/run/gpsd.sock'
sock.connect(gpsd_address)
# #BSSL 0x01<CR><LF> - Set NMEA Output Sentence (GGA only)
# cmd = '#BSSL 0x01' + '\r\n'
# #RST<CR><LF> - RST - Reset
cmd = '#RST' + '\r\n'
message = '&/dev/ttyUSB1='
cmd_hex = cmd.encode('utf-8').hex()
print ('cmd_hex {}'.format(cmd_hex))
# cmd_hex 235253540d0a
message += cmd_hex
bin_message = message.encode('utf-8')
print ("bin message {}".format(bin_message))
# bin message b'&/dev/ttyUSB1=235253540d0a'
sock.sendall(bin_message)
data = sock.recv(16)
print ('received {}'.format(data))
# received b'OK\n'
sock.close()
In my case I am sending the #RST command followed by the CR and LF symbols.

"OOM command not allowed when used memory > 'maxmemory'" for an Amazon ElastiCache Redis

I'm getting "OOM command not allowed when used memory > 'maxmemory'" error from time to time when trying to insert into an Elasticache redis node.
I went from a self-managed redis instance (maxmemory = 12Go, maxmemory-policy = allkeys-lru) to an Elasticache redis one (r5.large, i.e. maxmemory = 14 Go, maxmemory-policy = allkeys-lru).
However, after the migration of keys I'm getting "OOM command not allowed when used memory > 'maxmemory'" error from time to time that I don't manage to understand.
I've checked what they recommend here: https://aws.amazon.com/premiumsupport/knowledge-center/oom-command-not-allowed-redis/ to solve the problem, but so far:
I have a TTL on all keys
I'm already in allkeys-lru
When I look at the node's freeable memory, I have about 7 GB available
Here is the output of INFO memory:
# Memory
used_memory:10526693040
used_memory_human:9.80G
used_memory_rss:11520012288
used_memory_rss_human:10.73G
used_memory_peak:10560011952
used_memory_peak_human:9.83G
used_memory_peak_perc:99.68%
used_memory_overhead:201133315
used_memory_startup:4203584
used_memory_dataset:10325559725
used_memory_dataset_perc:98.13%
allocator_allocated:10527575720
allocator_active:11510194176
allocator_resident:11667750912
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:10527885773
maxmemory_human:9.80G
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.09
allocator_frag_bytes:982618456
allocator_rss_ratio:1.01
allocator_rss_bytes:157556736
rss_overhead_ratio:0.99
rss_overhead_bytes:-147738624
mem_fragmentation_ratio:1.09
mem_fragmentation_bytes:993361528
mem_not_counted_for_evict:0
mem_replication_backlog:1048576
mem_clients_slaves:0
mem_clients_normal:153411
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
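A quick way to pull just the relevant numbers, sketched with redis-py (the endpoint below is a placeholder):

import redis

# Placeholder host; use the actual ElastiCache node endpoint
r = redis.Redis(host='my-node.xxxxxx.cache.amazonaws.com', port=6379)
mem = r.info('memory')
print('used:', mem['used_memory_human'],
      'max:', mem['maxmemory_human'],
      'headroom (bytes):', mem['maxmemory'] - mem['used_memory'])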
If you have any clue how to solve this, I'd appreciate it.
Thanks!

Increase key days every 24 hours

I can increase the key "days" with the following commands:
$ redis-cli
127.0.0.1:6379> set days 1
OK
127.0.0.1:6379> incr days
(integer) 2
127.0.0.1:6379> get days
"2"
How can I increment it automatically every 24 hours?
First you need to add the Celery config (see the Celery docs). Something like this:
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
app = Celery('allunac', broker='redis://localhost:6379/0')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
I chose Redis as the broker because you already work with it in your project, but you can choose another broker such as RabbitMQ (see the docs).
Because you need a task to run at regular intervals, you also need Celery beat (see the docs).
Add your task:
from datetime import timedelta
from django.core.cache import cache
from celery.decorators import periodic_task

@periodic_task(run_every=timedelta(seconds=30))  # 30 s for demonstration; see the 24-hour variant below
def redis_add():
    if not cache.get('days'):
        cache.set('days', 1)  # set initial value
    else:
        cache.incr('days', 2)  # increase by 2
Run celery with beat:
celery -A proj worker -l info -B
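The question asks for every 24 hours, so in practice you would widen the interval (a sketch reusing the imports above; the task name is mine; celery.schedules.crontab also works as run_every if you want a fixed time of day):

from datetime import timedelta
from django.core.cache import cache
from celery.decorators import periodic_task

@periodic_task(run_every=timedelta(hours=24))
def incr_days():
    if cache.get('days') is None:
        cache.set('days', 1)  # initial value
    else:
        cache.incr('days')  # increase by 1 once a day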
(Screenshots of the Celery log and the Redis output omitted.)

Total Number of connections for each connected Redis Service

How can I check which service is consuming the most resources on Redis,
or which service has the highest number of connections to Redis?
You can run the command CLIENT LIST; you'll see entries like:
id=39 addr=127.0.0.1:34706 fd=7 name= age=141156 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=client
id=78 addr=127.0.0.1:58014 fd=5 name= age=63779 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=llen
id=80 addr=127.0.0.1:36826 fd=6 name= age=46776 idle=1685 flags=N db=1 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=del
The most useful fields for your question are "age" and "idle": "age" is the total duration of the connection in seconds, and "idle" is the connection's idle time. So (age - idle) / age reflects how heavily a client uses the server's CPU relative to other clients: the larger the value, the more active the client, though this is not very precise.
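A rough sketch of that calculation with redis-py (client_list() returns the same fields as CLIENT LIST, with string values):

import redis

r = redis.Redis(host='localhost', port=6379)
scored = []
for c in r.client_list():
    age, idle = int(c['age']), int(c['idle'])
    if age > 0:
        # Fraction of the connection's lifetime spent active
        scored.append(((age - idle) / age, c['addr'], c['cmd']))
for ratio, addr, cmd in sorted(scored, reverse=True):
    print(round(ratio, 3), addr, cmd)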
Other commands can also give you some hints, such as INFO and MONITOR.
INFO gives you statistics about the Redis server, such as memory usage, commands processed, CPU usage, connected clients, and so on; you can refer to its documentation for more.
MONITOR gives you a real-time display of what is happening right now: which command is being executed, and who sent it. You could compute each client's resource usage from the MONITOR output.
For example, for every command you parse it and add a cost to that client's running total. In terms of time complexity, SET is O(1) and LRANGE is O(N), but it is difficult to do this precisely. You can capture the log with:
redis-cli monitor > redis-command.log
You can then use this log for analysis, but note that the MONITOR command will reduce your Redis server's throughput.
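As a starting point for that analysis, each MONITOR line looks like 1339518083.107412 [0 127.0.0.1:60866] "keys" "*", so counting commands per client address is a few lines of Python (per-command cost weights are left out):

from collections import Counter

counts = Counter()
with open('redis-command.log') as f:
    for line in f:
        parts = line.split()
        # parts[2] is the client address, e.g. 127.0.0.1:60866]
        if len(parts) >= 4 and parts[1].startswith('['):
            counts[parts[2].rstrip(']')] += 1

for addr, n in counts.most_common(10):
    print(addr, n)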
If you run the "client list" command against your Redis instance, you should be able to see the entire list of clients connected to your redis instance along with their IP addresses. You can then see which clients (services) have the highest number of connections to your Redis instance.
Or simply run
info clients
Output:
connected_clients:xxx
client_longest_output_list:xxx
client_biggest_input_buf:x
blocked_clients:xx
To get which client is having higher number of connection we can use the below shell script
#!/bin/bash
# Count connections per client IP (strip the "addr=" prefix and the port)
clients=$(redis-cli client list | awk '{print $2}' | cut -d= -f2 | cut -d: -f1 | sort | uniq -c | sort -rn)
# Print the client with the highest number of connections
highest=$(echo "$clients" | head -n 1)
echo "Client with the highest number of connections: $highest"
This prints the client with the highest number of connections. I hope it helps!

How to craft specific packets on the host of Mininet to generate massive Packet-In messages

I am wondering how to generate massive numbers of packet-in messages to the controller, in order to test the response time of an SDN controller in a Mininet environment.
Can you give me some advice?
You could use iperf to send packets, like this (fill in the server address and the file to send):
$ iperf -c <server_ip> -F <file_name>
You could specify the amount of time:
$IPERF_TIME (-t, --time)
The time in seconds to transmit for. Iperf normally works by repeatedly sending an array of len bytes for time seconds. Default is 10 seconds. See also the -l and -n options.
Here is a nice reference for iperf: https://iperf.fr/.
If you would like to use Scapy, try this:
from scapy.all import IP, TCP, send

data = "University of Network blah blah"   # arbitrary payload
a = IP(dst="129.132.2.21") / TCP() / data  # build an IP/TCP packet
send(a)                                    # send it at layer 3
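To generate packet-in messages in volume, one common trick (a sketch; the subnet, packet count, and timing are arbitrary choices) is to randomize the destination address so that each packet misses the switch's flow table and is punted to the controller:

from scapy.all import IP, TCP, RandIP, send

# Every new destination should miss the installed flow rules, so the
# switch forwards the packet to the controller as a packet-in message.
pkts = [IP(dst=RandIP("10.0.0.0/8")) / TCP() / "probe" for _ in range(1000)]
send(pkts, inter=0.001)  # tune the inter-packet gap to the rate you need

Run this from a Mininet host (e.g. in an xterm opened with xterm h1) while measuring at the controller.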