What is causing INFO replication and UNSUBSCRIBE events in Redis

When running redis-cli monitor I'm seeing the following events over and over. What is causing these events? And can they be suppressed?
Windows Redis version 2.8.2400
1503693326.380836 [0 127.0.0.1:21771] "UNSUBSCRIBE" " \xcc\xf0(\xe4\x00\x01B\x83l\x8a\xc0\xaa\x80\xd2)"
1503693326.662796 [0 127.0.0.1:19523] "UNSUBSCRIBE" "\xd2\xbc\x95Tw\xa2KE\x9e\x80:\xd1'YM\x91"
1503693326.662823 [0 127.0.0.1:19522] "INFO" "replication"
1503693327.366005 [0 127.0.0.1:20967] "UNSUBSCRIBE" "9\xecQ+\xd7\xc0\xcfD\x96\xe0E9\xacP\xf06"
1503693327.366032 [0 127.0.0.1:18375] "UNSUBSCRIBE" "\xb7\xaem\xc0h\x1a`H\x82\xea\xc6\xa6\xa8\x97a&"
1503693327.647036 [0 127.0.0.1:20284] "INFO" "replication"
1503693327.647090 [0 127.0.0.1:20285] "UNSUBSCRIBE" "\xa3O\xfd\x8a\x8e;?E\x91].\xc6\xb9\xbbc3"
1503693329.429580 [0 127.0.0.1:22990] "UNSUBSCRIBE" " \xde\x94 <\xaa\x16J\x91z4\xf5\x8a\xe5.2"
1503693329.429606 [0 127.0.0.1:22989] "INFO" "replication"
1503693330.380803 [0 127.0.0.1:19747] "INFO" "replication"
1503693330.380878 [0 127.0.0.1:19748] "UNSUBSCRIBE" "piZ\xcf\x8b|II\x98M/\x00\xdaxvR"
1503693330.740826 [0 127.0.0.1:19706] "UNSUBSCRIBE" "\x80\xb7v\xf8#\xb7\x0cD\x9bm\xb1\x9c\xb6f\xc8\xd7"
1503693330.740855 [0 127.0.0.1:19705] "INFO" "replication"
1503693332.412236 [0 127.0.0.1:19385] "INFO" "replication"
1503693332.412292 [0 127.0.0.1:19386] "UNSUBSCRIBE" "\xba\xf7&\xa2\x8c\x13\xadN\xbeV\b\xe9\x9a\x1e\x95\xc6"
1503693332.662329 [0 127.0.0.1:19267] "INFO" "replication"
1503693332.662396 [0 127.0.0.1:19268] "UNSUBSCRIBE" "r\x06\xb0\xaf\xfa\xd7\x8cO\xab\xf8\b/\x1bB+\xd6"
1503693333.006409 [0 127.0.0.1:18250] "INFO" "replication"
1503693333.006462 [0 127.0.0.1:18251] "UNSUBSCRIBE" "\x13\xac(\xb0\xe6s\x9fF\x8d\xf7\x8b\x89\x96\x03\xdd\xa7"

What is causing these events?
One or more clients are sending these commands. Your version of Redis is too old to be of much help in identifying the "offending" clients - I suggest you upgrade to the latest official version (i.e. not the Windows version).
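Even on this version you may be able to narrow down the source yourself. A minimal diagnostic sketch (not part of the original answer; it assumes redis-cli is on your PATH) is to cross-reference the client address that MONITOR prints with CLIENT LIST and the OS socket table:
redis-cli client list
# Each line has an addr=IP:port field and a cmd= field showing the client's last command.
# Match the addr value against the [0 127.0.0.1:21771] entries that MONITOR prints,
# then trace the port back to a local process, e.g. on Windows:
netstat -ano | findstr 21771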
And can they be suppressed?
The MONITOR command dumps everything. You can postprocess the output if you want to filter things.
Note that monitoring a production instance is potentially dangerous, as it can affect the server's performance.
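If you only want to cut the noise while watching, a simple post-processing sketch (assuming a shell with grep; adapt for findstr on Windows) is:
redis-cli monitor | grep -v -E '"INFO"|"UNSUBSCRIBE"'
This hides those two commands from the stream; it does not stop the clients from sending them.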

Related

Redis monitor command shows the same requests every second

I just set up a Redis client with an Express server so that I can persist user session data in the Redis store. For interest's sake, I am monitoring my requests on the CLI using the monitor command to see what requests are made through Express. When a user logs in I set a userId key on the req.session object, and the request shows up on the CLI:
"set" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx" "{\"cookie\":{\"originalMaxAge\":3600000,\"expires\":\"2020-10-09T12:09:37.604Z\",\"secure\":false,\"httpOnly\":true,\"path\":\"/\"}}" "EX" "3600"
But after storing the session information, get and expire commands keep getting logged on the CLI:
1602241780.017805 [0 127.0.0.1:61201] "get" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx"
1602241780.026601 [0 127.0.0.1:61201] "expire" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx" "3600"
1602241783.014473 [0 127.0.0.1:61201] "get" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx"
1602241783.020260 [0 127.0.0.1:61201] "expire" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx" "3600"
1602241786.018502 [0 127.0.0.1:61201] "get" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx"
1602241786.024512 [0 127.0.0.1:61201] "expire" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx" "3600"
1602241789.018028 [0 127.0.0.1:61201] "get" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx"
1602241789.023479 [0 127.0.0.1:61201] "expire" "sess:2w8OkICwucsO9-18z_ghxA1FLH9GcWpx" "3600"
And it continues like this every second.
I am pretty sure that I am not constantly calling any functions through Express, so why does the monitor command show these requests?
The problem was GraphQL Playground, which performs an introspection query every 2 seconds. I disabled the setting in the settings tab and it worked!
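For reference, the option in question is GraphQL Playground's schema polling; the exact key name below is my assumption based on recent Playground builds, so verify it in your settings tab:
{
  "schema.polling.enable": false
}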

Redis reaching max clients immediately on startup

We're having an issue hitting the max number of clients immediately after starting redis. When issuing a MONITOR command, we see thousands of INFO commands issued from our master server.
It seems to be baselining around 9000 connections most of the time. This will occasionally drop to a more normal value for our server for a couple of seconds, then it will immediately spike back to the ~9000 connections.
Anytime redis gets busy during the normal business day, we are hitting our max connections and our services start failing.
When I run the MONITOR command, this is a sample of what I see.
1551452385.425215 [0 192.168.100.161:54068] "info"
1551452385.425556 [0 192.168.100.161:54066] "info"
1551452385.425891 [0 192.168.100.161:54071] "info"
1551452385.426242 [0 192.168.100.161:54069] "info"
1551452385.426587 [0 192.168.100.161:54070] "info"
1551452385.426933 [0 192.168.100.161:54072] "info"
1551452385.427281 [0 192.168.100.161:54074] "info"
1551452385.427625 [0 192.168.100.161:54075] "info"
1551452385.427972 [0 192.168.100.161:54076] "info"
1551452385.428316 [0 192.168.100.161:54077] "info"
1551452385.428670 [0 192.168.100.161:54078] "info"
1551452385.429011 [0 192.168.100.161:54079] "info"
1551452385.429359 [0 192.168.100.161:54080] "info"
1551452385.429706 [0 192.168.100.161:54081] "info"
1551452385.430051 [0 192.168.100.161:54082] "info"
1551452385.430398 [0 192.168.100.161:54083] "info"
1551452385.430741 [0 192.168.100.161:54084] "info"
1551452385.431086 [0 192.168.100.161:54085] "info"
1551452385.431454 [0 192.168.100.161:54086] "info"
1551452385.431792 [0 192.168.100.161:54087] "info"
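When you see a flood like this, a useful first step (a diagnostic sketch, not something from the question; it assumes redis-cli plus standard Unix text tools) is to group CLIENT LIST output by source address and by last command, so you can see which host owns the ~9000 connections and what they are doing:
# Connections per client IP:
redis-cli client list | grep -o 'addr=[^ ]*' | cut -d: -f1 | sort | uniq -c | sort -rn | head
# Connections by the last command each client issued:
redis-cli client list | grep -o 'cmd=[^ ]*' | sort | uniq -c | sort -rn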
Our redis.conf is below.
daemonize yes
pidfile "/var/run/redis/redis.pid"
port 6379
tcp-backlog 2048
unixsocket "/tmp/redis.sock"
unixsocketperm 777
timeout 90
tcp-keepalive 30
loglevel notice
logfile "/var/log/redis/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/var/lib/redis"
slave-serve-stale-data yes
repl-ping-slave-period 5
maxclients 10208
slave-read-only yes
repl-disable-tcp-nodelay no
maxmemory-policy noeviction
appendonly yes
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
lua-time-limit 15000
slowlog-log-slower-than 10000
slowlog-max-len 1024
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-entries 512
list-max-ziplist-value 64
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
slave-priority 1
slaveof 192.168.100.161 6379
Our INFO output is below.
# Server
redis_version:3.0.5
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:cfc7e460e931db7b
redis_mode:standalone
os:Linux 2.6.32-573.8.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:14289
run_id:e809f42198d0a568cc3394cee322a20c069ed682
tcp_port:6379
uptime_in_seconds:35562
uptime_in_days:0
hz:10
lru_clock:7947917
config_file:/etc/redis.conf
# Clients
connected_clients:9399
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:5357045872
used_memory_human:4.99G
used_memory_rss:5606625280
used_memory_peak:5664138480
used_memory_peak_human:5.28G
used_memory_lua:36864
mem_fragmentation_ratio:1.05
mem_allocator:jemalloc-3.6.0
# Persistence
loading:0
rdb_changes_since_last_save:42
rdb_bgsave_in_progress:0
rdb_last_save_time:1551451644
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:23
rdb_current_bgsave_time_sec:-1
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:22
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:4453038609
aof_base_size:4448482140
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:1
# Stats
total_connections_received:3677782
total_commands_processed:4176358
instantaneous_ops_per_sec:12
total_net_input_bytes:6261124496
total_net_output_bytes:11824027791
instantaneous_input_kbps:1.50
instantaneous_output_kbps:6.87
rejected_connections:3662459
sync_full:2
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:13406
keyspace_misses:10
pubsub_channels:1
pubsub_patterns:0
latest_fork_usec:104081
migrate_cached_sockets:0
# Replication
role:slave
master_host:192.168.100.161
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:26797222
slave_priority:1
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:7529949
repl_backlog_histlen:10344
# CPU
used_cpu_sys:326.54
used_cpu_user:1835.05
used_cpu_sys_children:303.96
used_cpu_user_children:2131.36
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=4182233,expires=1822,avg_ttl=1565571347
db4:keys=1,expires=0,avg_ttl=0
db9:keys=9957,expires=0,avg_ttl=0
db15:keys=386985,expires=0,avg_ttl=0

Redis slave syncs but does not continue replication

I have a Redis v3.0.2 master and slave.
On the slave I issue slaveof (masterIP) 6379.
The sync happens, the logs look fine on master and slave, and the key counts look sane.
After the sync completes and the slave loads the database, no more operations happen.
Running monitor on the master gives me hundreds of sets/sec.
The slave only sees a few deletes and an occasional PING.
Slave Log:
2734:S 16 Aug 07:23:29.460 * MASTER <-> SLAVE sync: Loading DB in memory
2734:S 16 Aug 07:25:16.531 * MASTER <-> SLAVE sync: Finished with success
Slave Monitor:
~
[119](root#[slave])[0]:\> redis-cli
127.0.0.1:6379> monitor
OK
1534405063.907020 [0 [master]:6379] "PING"
1534405065.409863 [0 [master]:6379] "DEL" "pmlock12"
1534405065.709784 [0 [master]:6379] "DEL" "pmlock22"
1534405065.909400 [0 [master]:6379] "DEL" "pmlock27"
Master Log
2951:C 16 Aug 07:20:57.908 * RDB: 279 MB of memory used by copy-on-write
2745:M 16 Aug 07:20:58.297 * Background saving terminated with success
2745:M 16 Aug 07:22:59.369 * Synchronization with slave 10.168.230.15:6379 succeeded
Master Monitor:
1534405287.136316 [0 [src]:54660] "SET" "CMP36" "{\"m_cur...
1534405252.002731 [0 [src]:45742] "SET" "PM14" "H4sIAAAAAAAAAO1cW4...
Master Info
[209](root#master)[0]:\> redis-cli info replication
# Replication
role:master
connected_slaves:1
slave0:ip=[slave],port=6379,state=online,offset=1747897005,lag=0
master_repl_offset:1748304094
repl_backlog_active:1
repl_backlog_size:104857600
repl_backlog_first_byte_offset:1643446495
repl_backlog_histlen:104857600
I've rebooted the master and slave; I just can't get the master to send through anything but ping and delete. I'm not well versed in Redis, so I'm sure I'm just missing something.
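One quick way to check whether writes are actually reaching the slave (a hedged sketch, not from the original post) is to compare replication offsets and key counts on both sides a few times; if the slave's offset keeps advancing together with the master's and DBSIZE grows in step, data is arriving even if the MONITOR output looks sparse:
# Run on both the master and the slave, a few seconds apart:
redis-cli info replication | grep -E 'role|repl_offset'
redis-cli dbsize
# On the master, the slave0 line also shows the offset the slave has acknowledged:
redis-cli info replication | grep slave0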

Can't connect to TFS server from Intellij IDEA 2016.1.3: "Failed to load workspaces: Host contacted, but no TFS service found"

I am having an issue where I am not able to connect to my TFS server from Intellij IDEA 2016.1.3. For the sake of this example, assume that the url to my TFS server is: https://myurlsegment.visualstudio.com. Since I don't have enough reputation to post more than 2 urls, I am going to omit the "https" part from some of the urls in the description below, but rest assured that it is present in the actual url. Also assume that the name of my collection is "mycol". Finally, note that I have enabled alternate authentication credentials for this server from TFS security.
Here are the repro steps from Intellij IDEA:
Go to: VCS->TFS->Edit Configuration
The "Manage TFS Servers and Workspaces" dialog opens, click "Add..."
The "Add Team Foundation Server" dialog opens, fill out the details:
Address: https://myurlsegment.visualstudio.com
Here, I have also tried "://myurlsegment.visualstudio.com/mycol" and "://myurlsegment.visualstudio.com/DefaultCollection" (with https in front)
Auth: Alternate
User name: my microsoft (live) id
Password: password for alternate credentials as specified in Visual Studio Team Services.
Click OK
I get the error message:
"Failed to load workspaces: Host contacted, but no TFS service found"
After this, the server is still added, but with the wrong url. For some reason, Intellij IDEA appends "myurlsegment" to the original url, and I get the following for server name:
://myurlsegment.visualstudio.com/myurlsegment
Instead of this:
://myurlsegment.visualstudio.com/mycol (or ://myurlsegment.visualstudio.com/DefaultCollection)
Of course since I don't have anything under the url:
://myurlsegment.visualstudio.com/myurlsegment, I can't add any workspaces or do anything with this server added in such manner - it's useless.
Any ideas what may be causing this error?
EDIT:
Btw I am able to connect just fine to my TFS server from Visual Studio 2015. I noticed that the url in Visual Studio is indeed shown as:
myurlsegment.visualstudio.com/myurlsegment, so this may not be the problem. I also looked at the IntelliJ IDEA log, and found this:
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.header - >> "POST /myurlsegment/Services/v1.0/Registration.asmx HTTP/1.1[\r][\n]"
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.header - >> "Content-Type: application/soap+xml; charset=UTF-8;
action="http://schemas.microsoft.com/TeamFoundation/2005/06/Services/Registration/03/GetRegistrationEntries"[\r][\n]"
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.header - >> "Authorization: Basic [\r][\n]"
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.header - >> "User-Agent: Axis2[\r][\n]"
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.header - >> "Accept-Encoding: gzip[\r][\n]"
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.header - >> "Host: myurlsegment.visualstudio.com[\r][\n]"
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.header - >> "Content-Length: 270[\r][\n]"
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.header - >> "[\r][\n]"
2016-07-07 08:29:01,021 [ ] DEBUG - httpclient.wire.content - >> ""
2016-07-07 08:29:01,721 [ ] DEBUG - httpclient.wire.header - << "HTTP/1.1 404 Not Found[\r][\n]"
2016-07-07 08:29:01,721 [ ] DEBUG - httpclient.wire.header - << "HTTP/1.1 404 Not Found[\r][\n]"
Hope this helps.
I can get the same behavior as you with the Ultimate edition. Try the workaround here: IDEA-155939 "Failed to load workspaces: Host contacted, but no TFS service found" while adding "*.visualstudio.com" TFS server. It works on my side.
Close IDEA
Locate tfs servers cache file:
Windows - %USER_HOME%\Local Settings\Application Data\Microsoft\Team Foundation\<VERSION>\Cache\VersionControl.config. <VERSION> could be one of 4.0, 3.0, 2.0, 1.0 values.
Other - <IDEA_OPTIONS_FOLDER>/tfs-servers.xml
Correct the uri attribute of the corresponding ServerInfo tag from the https://<TEAM>.visualstudio.com/<TEAM> value to just https://<TEAM>.visualstudio.com/ (see the sketch after this list)
Start IDEA
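For illustration only, the corrected entry might look roughly like the fragment below; the surrounding file structure varies between versions and is my assumption, only the ServerInfo uri attribute from the steps above matters:
<!-- before -->
<ServerInfo uri="https://myurlsegment.visualstudio.com/myurlsegment" ... >
<!-- after -->
<ServerInfo uri="https://myurlsegment.visualstudio.com/" ... >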
In my case, the correct URL path for the server to use was found in
C:\Users\[yourUserName]\AppData\Local\Microsoft\Team Foundation\5.0\Configuration\VersionControl\LocalItemExclusions.config at the top of the file, in the line starting with
<TeamProjectCollection id=

Cannot lookup EJB bean from remote server

I have a JBoss 7 server with SSL enabled for remote connections, and the JNDI properties are as below:
java.naming.security.principal=admin, server=10.10.10.10
java.naming.security.credentials=1111
java.naming.provider.url=remote://10.10.10.10:4447
java.naming.factory.initial=org.jboss.naming.remote.client.InitialContextFactory
jboss.naming.client.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT=false
jboss.naming.client.remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=true
jboss.naming.client.connect.options.org.xnio.Options.SSL_STARTTLS=true
If the client and the server have the same IP, the lookup succeeds. However, if I try to look up from another server, so that the client and the server are not the same machine, it fails.
I enabled the SSL debug log and see some exceptions:
!ENTRY com.test.my.client.fwk 0 0 2014-11-21 15:04:49.648
!MESSAGE (Timezone is ICT.) ;6340; org.jboss.naming.remote.client.HaRemoteNamingStore logged : "Failed to connect to server remote://192.168.95.111:4447:
java.lang.RuntimeException: Operation failed with status WAITING
at org.jboss.naming.remote.protocol.IoFutureHelper.get(IoFutureHelper.java:89)
at org.jboss.naming.remote.client.HaRemoteNamingStore.failOverSequence(HaRemoteNamingStore.java:193)
at org.jboss.naming.remote.client.HaRemoteNamingStore.namingStore(HaRemoteNamingStore.java:144)
at org.jboss.naming.remote.client.HaRemoteNamingStore.namingOperation(HaRemoteNamingStore.java:125)
at org.jboss.naming.remote.client.HaRemoteNamingStore.lookup(HaRemoteNamingStore.java:241)
at org.jboss.naming.remote.client.RemoteContext.lookup(RemoteContext.java:79)
at org.jboss.naming.remote.client.RemoteContext.lookup(RemoteContext.java:83)
at javax.naming.InitialContext.lookup(InitialContext.java:411)
!MESSAGE (Timezone is ICT.) ;6340; org.jboss.remoting.remote.client logged : "Client authentication failed for mechanism JBOSS-LOCAL-USER: javax.security.sasl.SaslException: Failed to read server challenge [Caused by java.io.FileNotFoundException: \var\opt\ams\local \tmp\auth\local3549659832926743393.challenge (The system cannot find the path specified)]"
!ENTRY com.test.my.client.fwk 0 0 2014-11-21 15:04:50.497
!MESSAGE (Timezone is ICT.) ;6340; org.jboss.remoting.remote.connection logged : "Connection error detail:
java.io.IOException: An established connection was aborted by the software in your host machine
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.xnio.nio.AbstractNioStreamChannel.read(AbstractNioStreamChannel.java:249)
at org.xnio.ssl.JsseConnectedSslStreamChannel.handleUnwrapResult(JsseConnectedSslStreamChannel.java:526)
at org.xnio.ssl.JsseConnectedSslStreamChannel.handleHandshake(JsseConnectedSslStreamChannel.java:396)
at org.xnio.ssl.JsseConnectedSslStreamChannel.doFlush(JsseConnectedSslStreamChannel.java:638)
at org.xnio.ssl.JsseConnectedSslStreamChannel.flushAction(JsseConnectedSslStreamChannel.java:613)
at org.xnio.channels.TranslatingSuspendableChannel.flush(TranslatingSuspendableChannel.java:604)
at org.xnio.channels.FramedMessageChannel.flushAction(FramedMessageChannel.java:275)
at org.xnio.channels.TranslatingSuspendableChannel.flush(TranslatingSuspendableChannel.java:604)
at org.jboss.remoting3.remote.RemoteConnection$RemoteWriteListener.send(RemoteConnection.java:306)
at org.jboss.remoting3.remote.RemoteConnection.send(RemoteConnection.java:124)
at org.jboss.remoting3.remote.RemoteConnectionHandler.sendCloseRequestBody(RemoteConnectionHandler.java:286)
at org.jboss.remoting3.remote.RemoteConnectionProvider$3$1.cancel(RemoteConnectionProvider.java:177)
at org.xnio.AbstractIoFuture.cancel(AbstractIoFuture.java:306)
at org.xnio.AbstractIoFuture.cancel(AbstractIoFuture.java:39)
at org.jboss.remoting3.remote.RemoteConnectionProvider.closeAction(RemoteConnectionProvider.java:236)
at org.jboss.remoting3.spi.AbstractHandleableCloseable.closeAsync(AbstractHandleableCloseable.java:359)
at org.jboss.remoting3.EndpointImpl.closeAction(EndpointImpl.java:204)
at org.jboss.remoting3.spi.AbstractHandleableCloseable.closeAsync(AbstractHandleableCloseable.java:359)
at org.jboss.naming.remote.client.EndpointCache.release(EndpointCache.java:58)
at org.jboss.naming.remote.client.EndpointCache$EndpointWrapper.closeAsync(EndpointCache.java:189)
at org.jboss.naming.remote.client.InitialContextFactory$1.close(InitialContextFactory.java:231)
at org.jboss.naming.remote.client.RemoteContext.finalize(RemoteContext.java:199)
at java.lang.ref.Finalizer.invokeFinalizeMethod(Native Method)
at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:101)
at java.lang.ref.Finalizer.access$100(Finalizer.java:32)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:190)"
!ENTRY com.test.my.client.fwk 0 0 2014-11-21 15:04:50.498
!MESSAGE (Timezone is ICT.) ;6340; org.xnio.safe-close logged : "Closing resource org.xnio.channels.FramedMessageChannel around org.xnio.ssl.JsseConnectedSslStreamChannel around TCP socket channel (NIO) <139ba40>"