I am trying to set up deepstream.io. My goal is to have 4 Docker containers:
deepstream
the deepstream search
redis
rethink
Redis as well as RethinkDB are running and accepting connections. Starting deepstream now states that the cache as well as the storage are not ready. I do not get why, or what "no dependency description provided" is supposed to tell me.
Why does deepstream not accept the connection?
{
"deepstreamVersion": "3.1.0",
"gitRef": "2557412988b128b3331f6079ff1bd26b0b49302d",
"buildTime": "Mon Sep 25 2017 14:42:10 GMT+0000 (UTC)",
"platform": "linux",
"arch": "x64",
"nodeVersion": "v6.11.3",
"libs": [
"deepstream.io-cache-hazelcast:1.0.2",
"deepstream.io-cache-memcached:1.0.0",
"deepstream.io-cache-redis:1.1.0",
"deepstream.io-logger-winston:1.1.0",
"deepstream.io-storage-elasticsearch:1.0.1",
"deepstream.io-storage-mongodb:1.1.0",
"deepstream.io-storage-postgres:1.1.3",
"deepstream.io-storage-rethinkdb:1.0.2"
]
}
Running deepstream start
_ _
__| | ___ ___ _ __ ___| |_ _ __ ___ __ _ _ __ ____
/ _` |/ _ \/ _ \ '_ \/ __| __| '__/ _ \/ _` | '_ ` _ \
| (_| | __/ __/ |_) \__ \ |_| | | __/ (_| | | | | | |
\__,_|\___|\___| .__/|___/\__|_| \___|\__,_|_| |_| |_|
|_|
===================== starting =====================
INFO | State transition (start): Stopped -> LoggerInit
INFO | logger ready: std out/err
INFO | State transition (logger-started): LoggerInit -> PluginInit
INFO | deepstream version: 3.1.0
INFO | configuration file loaded from /etc/deepstream/config.yml
INFO | library directory set to: /var/lib/deepstream
INFO | authenticationHandler ready: none
INFO | permissionHandler ready: valve permissions loaded from /etc/deepstream/permissions.yml
INFO | cache ready: no dependency description provided
INFO | storage ready: no dependency description provided
INFO | State transition (plugins-started): PluginInit -> ServiceInit
INFO | State transition (services-started): ServiceInit -> ConnectionEndpointInit
iconv-lite warning: javascript files use encoding different from utf-8. See https://github.com/ashtuchkin/iconv-lite/wiki/Javascript-source-file-encodings for more info.
INFO | Listening for websocket connections on 0.0.0.0:6020/deepstream
INFO | Listening for health checks on path /health-check
INFO | connectionEndpoint ready: WebSocket Connection Endpoint
INFO | Listening for http connections on 0.0.0.0:8080
INFO | Listening for health checks on path /health-check
INFO | connectionEndpoint ready: HTTP connection endpoint
INFO | State transition (connection-endpoints-started): ConnectionEndpointInit -> Running
INFO | Deepstream started
The config file:
# General
# Show the deepstream logo on startup (highly recommended)
showLogo: true
# Log messages with this level and above. Valid levels are DEBUG, INFO, WARN, ERROR, OFF
logLevel: DEBUG
# Directory where all plugins reside
libDir: /var/lib/deepstream
# Connectivity
# webfacing URL under which this client is reachable. Used for loadbalancing / failover
externalUrl: null
# SSL Configuration
sslKey: null
sslCert: null
sslCa: null
# Connection Endpoint Configuration
# to disable, replace configuration with null eg. `http: null`
connectionEndpoints:
websocket:
name: uws
options:
# port for the websocket server
port: 6020
# host for the websocket server
host: 0.0.0.0
# url path websocket connections connect to
urlPath: /deepstream
# url path for http health-checks, GET requests to this path will return 200 if deepstream is alive
healthCheckPath: /health-check
# the amount of milliseconds between each ping/heartbeat message
heartbeatInterval: 30000
# the amount of milliseconds that writes to sockets are buffered
outgoingBufferTimeout: 0
# Security
# amount of time a connection can remain open while not being logged in
# or false for no timeout
unauthenticatedClientTimeout: 180000
# invalid login attempts before the connection is cut
maxAuthAttempts: 3
# if true, the logs will contain the cleartext username / password of invalid login attempts
logInvalidAuthData: false
# maximum allowed size of an individual message in bytes
maxMessageSize: 1048576
http:
name: http
options:
# port for the http server
port: 8080
# host for the http server
host: 0.0.0.0
# allow 'authData' parameter in POST requests, if disabled only token and OPEN auth is
# possible
allowAuthData: true
# enable the authentication endpoint for requesting tokens/userData.
# note: a custom authentication handler is required for token generation
enableAuthEndpoint: false
# path for authentication requests
authPath: /auth
# path for POST requests
postPath: /
# path for GET requests
getPath: /
# url path for http health-checks, GET requests to this path will return 200 if deepstream is alive
healthCheckPath: /health-check
# -- CORS --
# if disabled, only requests with an 'Origin' header matching one specified under 'origins'
# below will be permitted and the 'Access-Control-Allow-Credentials' response header will be
# enabled
allowAllOrigins: true
# a list of allowed origins
origins:
- 'https://example.com'
# Logger Configuration
# logger:
# # use either the default logger
# name: default
# options:
# colors: true
# # value of logLevel (line 4) will always overwrite this value
# logLevel: INFO
# # or the winston logger
# name: winston
# options:
# # specify a list of transports (console, file, time)
# -
# type: console
# options:
# # value of logLevel (line 4) will always overwrite this value
# level: info
# colorize: true
# -
# type: time
# options:
# filename: ../var/deepstream
# # or a custom logger
# path: ./my-custom-logger
# Plugin Configuration
plugins:
  cache:
    name: redis
    options:
      host: Redis-Redis-1
      port: 6379
  storage:
    name: rethinkdb
    options:
      host: rethinkdb-rethinkdb-proxy-1
      port: 28015
      splitChar: /
# Storage options
# a RegExp that matches recordNames. If it matches, the record's data won't be stored in the db
storageExclusion: null
auth:
  type: none
  # getting permissions from a http webhook
  # type: http
  # options:
  #   # a post request will be send to this url on every incoming connection
  #   endpointUrl: http://localhost:6004
  #   # any of these will be treated as access granted
  #   permittedStatusCodes: [ 200 ]
  #   # if the webhook didn't respond after this amount of milliseconds, the connection will be rejected
  #   requestTimeout: 2000
# Permissioning
permission:
  # Only config or custom permissionHandler at the moment
  type: config
  options:
    # Path to the permissionFile. Can be json, js or yml
    path: ./permissions.yml
    # Amount of times nested cross-references will be loaded. Avoids endless loops
    maxRuleIterations: 3
    # PermissionResults are cached to increase performance. Lower number means more loading
    cacheEvacuationInterval: 60000
# Timeouts (in milliseconds)
# Timeout for client RPC acknownledgement
rpcAckTimeout: 1000
# Timeout for actual RPC provider response
rpcTimeout: 10000
# Maximum time permitted to fetch from cache
cacheRetrievalTimeout: 1000
# Maximum time permitted to fetch from storage
storageRetrievalTimeout: 2000
# Plugin startup timeout – deepstream init will fail if any plugins fail to emit a 'done' event within this timeout
dependencyInitialisationTimeout: 10000
# The amount of time to wait for a provider to acknowledge or reject a listen request
listenResponseTimeout: 500
# The amount of time a broadcast will wait (to allow broadcast coalescing). -1 means disabled.
broadcastTimeout: 0
# A list of prefixes that, when a record is updated via setData and it matches one of the prefixes
# it will be permissioned and written directly to the cache and storage layers
# storageHotPathPatterns:
# - analytics/
# - metrics/
Redis PING
ping Redis-Redis-1
PING redis-redis-1.rancher.internal (10.42.230.105): 56 data bytes
64 bytes from 10.42.230.105: icmp_seq=0 ttl=62 time=12.676 ms
64 bytes from 10.42.230.105: icmp_seq=1 ttl=62 time=12.751 ms
64 bytes from 10.42.230.105: icmp_seq=2 ttl=62 time=15.441 ms
64 bytes from 10.42.230.105: icmp_seq=3 ttl=62 time=12.838 ms
^C--- redis-redis-1.rancher.internal ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 12.676/13.427/15.441/1.164 ms
The message "no dependency description provided" just means that, under the hood, the connector has no description property.
I'd recommend trying to set some data via a deepstream client and seeing whether it is written to the database.
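To make that check concrete, here is a minimal Java sketch (Java only because it is the language used elsewhere on this page; the Jedis client is an assumption on my side, it is not part of the setup above). It peeks directly into the Redis instance configured as the cache plugin; after writing a record through a deepstream client, its data should show up as keys here, and if it never does, the redis cache connector is probably not the one actually in use:

import redis.clients.jedis.Jedis;

import java.util.Set;

public class DeepstreamCacheCheck {

    public static void main(String[] args) {
        // Host and port copied from the cache plugin block in config.yml
        Jedis jedis = new Jedis("Redis-Redis-1", 6379);
        System.out.println("PING -> " + jedis.ping());

        // After setting a record via a deepstream client, its data should appear here.
        // KEYS * is fine for a small test instance; avoid it on production data sets.
        Set<String> keys = jedis.keys("*");
        System.out.println("Keys in the Redis cache: " + keys);

        jedis.close();
    }
}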
Related
I have configured Verdaccio on my local machine for testing. Below is my configuration:
#
# This is the default configuration file. It allows all users to do anything,
# please read carefully the documentation and best practices to
# improve security.
#
# Look here for more config file examples:
# https://github.com/verdaccio/verdaccio/tree/5.x/conf
#
# Read about the best practices
# https://verdaccio.org/docs/best
# path to a directory with all packages
storage: /verdaccio/storage/data
# path to a directory with plugins to include
plugins: /verdaccio/plugins
# https://verdaccio.org/docs/webui
# https://verdaccio.org/docs/configuration#uplinks
# a list of other known repositories we can talk to
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
    cache: false
# https://verdaccio.org/docs/configuration#authentication
auth:
  htpasswd:
    file: /verdaccio/htpasswd
# Learn how to protect your packages
# https://verdaccio.org/docs/protect-your-dependencies/
# https://verdaccio.org/docs/configuration#packages
packages:
  '@mycompany/*':
    access: $authenticated
    publish: $authenticated
    unpublish: $authenticated
  '@*/*':
    # scoped packages
    access: $all
    publish: $authenticated
    unpublish: $authenticated
    proxy: npmjs
  '**':
    access: $all
    publish: $authenticated
    unpublish: $authenticated
    # publish: azuread
    # unpublish: azuread
    # if package is not available locally, proxy requests to 'npmjs' registry
    proxy: npmjs
# To improve your security configuration and avoid dependency confusion
# consider removing the proxy property for private packages
# https://verdaccio.org/docs/best#remove-proxy-to-increase-security-at-private-packages
# https://verdaccio.org/docs/configuration#server
# You can specify HTTP/1.1 server keep alive timeout in seconds for incoming connections.
# A value of 0 makes the http server behave similarly to Node.js versions prior to 8.0.0, which did not have a keep-alive timeout.
# WORKAROUND: Through given configuration you can workaround following issue https://github.com/verdaccio/verdaccio/issues/301. Set to 0 in case 60 is not enough.
server:
keepAliveTimeout: 60
# Allow `req.ip` to resolve properly when Verdaccio is behind a proxy or load-balancer
# See: https://expressjs.com/en/guide/behind-proxies.html
# trustProxy: '127.0.0.1'
# https://verdaccio.org/docs/configuration#offline-publish
# publish:
# allow_offline: false
# https://verdaccio.org/docs/configuration#url-prefix
# url_prefix: /verdaccio/
# VERDACCIO_PUBLIC_URL='https://somedomain.org';
# url_prefix: '/my_prefix'
# // url -> https://somedomain.org/my_prefix/
# VERDACCIO_PUBLIC_URL='https://somedomain.org';
# url_prefix: '/'
# // url -> https://somedomain.org/
# VERDACCIO_PUBLIC_URL='https://somedomain.org/first_prefix';
# url_prefix: '/second_prefix'
# // url -> https://somedomain.org/second_prefix/'
# https://verdaccio.org/docs/configuration#security
# security:
# api:
# legacy: true
# jwt:
# sign:
# expiresIn: 29d
# verify:
# someProp: [value]
# web:
# sign:
# expiresIn: 1h # 1 hour by default
# verify:
# someProp: [value]
# https://verdaccio.org/docs/configuration#user-rate-limit
# userRateLimit:
# windowMs: 50000
# max: 1000
# https://verdaccio.org/docs/configuration#max-body-size
# max_body_size: 10mb
# https://verdaccio.org/docs/configuration#listen-port
# listen:
# - localhost:4873 # default value
# - http://localhost:4873 # same thing
# - 0.0.0.0:4873 # listen on all addresses (INADDR_ANY)
# - https://example.org:4873 # if you want to use https
# - "[::1]:4873" # ipv6
# - unix:/tmp/verdaccio.sock # unix socket
# The HTTPS configuration is useful if you do not consider use a HTTP Proxy
# https://verdaccio.org/docs/configuration#https
# https:
# key: ./path/verdaccio-key.pem
# cert: ./path/verdaccio-cert.pem
# ca: ./path/verdaccio-csr.pem
# https://verdaccio.org/docs/configuration#proxy
# http_proxy: http://something.local/
# https_proxy: https://something.local/
# https://verdaccio.org/docs/configuration#notifications
# notify:
# method: POST
# headers: [{ "Content-Type": "application/json" }]
# endpoint: https://usagge.hipchat.com/v2/room/3729485/notification?auth_token=mySecretToken
# content: '{"color":"green","message":"New package published: * {{ name }}*","notify":true,"message_format":"text"}'
middlewares:
audit:
enabled: true
# https://verdaccio.org/docs/logger
# log settings
logs: { type: stdout, format: pretty, level: http }
#experiments:
# # support for npm token command
# token: false
# # disable writing body size to logs, read more on ticket 1912
# bytesin_off: false
# # enable tarball URL redirect for hosting tarball with a different server, the tarball_url_redirect can be a template string
# tarball_url_redirect: 'https://mycdn.com/verdaccio/${packageName}/${filename}'
# # the tarball_url_redirect can be a function, takes packageName and filename and returns the url, when working with a js configuration file
# tarball_url_redirect(packageName, filename) {
# const signedUrl = // generate a signed url
# return signedUrl;
# }
# translate your registry, api i18n not available yet
# i18n:
# list of the available translations https://github.com/verdaccio/verdaccio/blob/master/packages/plugins/ui-theme/src/i18n/ABOUT_TRANSLATIONS.md
# web: en-US
# minio configuration
store:
  minio:
    # The HTTP port of your minio instance
    port: 9000
    # The endpoint on which verdaccio will access minio (without scheme)
    endPoint: 172.17.0.4
    # The minio access key
    accessKey: ***
    # The minio secret key
    secretKey: *****
    # Disable SSL if you're accessing minio directly through HTTP
    useSSL: false
    # The region used by your minio instance (optional, default to "us-east-1")
    # region: eu-west-1
    # A bucket where verdaccio will store it's database & packages (optional, default to "verdaccio")
    bucket: 'npm'
    # Number of retry when a request to minio fails (optional, default to 10)
    retries: 3
    # Delay between retries (optional, default to 100)
    delay: 50
I am able to log in, and I can publish and pull private packages. However, whenever I try to pull a package that is not present on my machine and it gets pulled from registry.npmjs.org, I get the warning "tarball data seems to be corrupted. Trying again." for any random package, and then the command crashes with ERR: CODE EINTEGRITY, sha256:****
I am not able to figure this out.
I've set up SSL on my local Kafka instance, and when I start the Kafka console producer/consumer on the SSL port, it gives an SSL handshake error:
Karans-MacBook-Pro:keystore karanalang$ $CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
>[2021-11-10 13:15:09,824] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:09,826] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,018] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,019] WARN [Producer clientId=console-producer] Bootstrap broker localhost:9093 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-11-10 13:15:10,195] ERROR [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
Here are the changes made:
create the truststore & keystore
Here is the output of the openssl command to check SSL connectivity:
Karans-MacBook-Pro:keystore karanalang$ openssl s_client -debug -connect localhost:9093 -tls1
CONNECTED(00000005)
write to 0x13d7bdf90 [0x13e01ea03] (118 bytes => 118 (0x76))
0000 - 16 03 01 00 71 01 00 00-6d 03 01 81 e8 00 cd c4 ....q...m.......
0010 - 04 4b 64 86 3e 30 97 32-c3 66 3a 8c ed 05 bf 97 .Kd.>0.2.f:.....
0020 - ff d5 b2 a4 26 fe 99 c0-7f 94 a1 00 00 2e c0 14 ....&...........
0030 - c0 0a 00 39 ff 85 00 88-00 81 00 35 00 84 c0 13 ...9.......5....
---
0076 - <SPACES/NULS>
read from 0x13d7bdf90 [0x13e01a803] (5 bytes => 5 (0x5))
0005 - <SPACES/NULS>
4307385836:error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number:/System/Volumes/Data/SWE/macOS/BuildRoots/e90674e518/Library/Caches/com.apple.xbs/Sources/libressl/libressl-56.60.2/libressl-2.8/ssl/ssl_pkt.c:386:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 5 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Start Time: 1636579015
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
Here is the server.properties:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
# SSL CHANGE
listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# SSL CHANGE
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
ssl.client.auth=none
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
##################### Confluent Metrics Reporter #######################
# Confluent Control Center and Confluent Auto Data Balancer integration
#
# Uncomment the following lines to publish monitoring data for
# Confluent Control Center and Confluent Auto Data Balancer
# If you are using a dedicated metrics cluster, also adjust the settings
# to point to your metrics kakfa cluster.
#metric.reporters=io.confluent.metrics.reporter.ConfluentMetricsReporter
#confluent.metrics.reporter.bootstrap.servers=localhost:9092
#
# Uncomment the following line if the metrics cluster has a single broker
#confluent.metrics.reporter.topic.replicas=1
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
############################# Confluent Authorizer Settings #############################
# Uncomment to enable Confluent Authorizer with support for ACLs, LDAP groups and RBAC
#authorizer.class.name=io.confluent.kafka.security.authorizer.ConfluentServerAuthorizer
# Semi-colon separated list of super users in the format <principalType>:<principalName>
#super.users=
# Specify a valid Confluent license. By default free-tier license will be used
#confluent.license=
# Replication factor for the topic used for licensing. Default is 3.
confluent.license.topic.replication.factor=1
# Uncomment the following lines and specify values where required to enable CONFLUENT provider for RBAC and centralized ACLs
# Enable CONFLUENT provider
#confluent.authorizer.access.rule.providers=ZK_ACL,CONFLUENT
# Bootstrap servers for RBAC metadata. Must be provided if this broker is not in the metadata cluster
#confluent.metadata.bootstrap.servers=PLAINTEXT://127.0.0.1:9092
# Replication factor for the metadata topic used for authorization. Default is 3.
confluent.metadata.topic.replication.factor=1
# Replication factor for the topic used for audit logs. Default is 3.
confluent.security.event.logger.exporter.kafka.topic.replicas=1
# Listeners for metadata server
#confluent.metadata.server.listeners=http://0.0.0.0:8090
# Advertised listeners for metadata server
#confluent.metadata.server.advertised.listeners=http://127.0.0.1:8090
############################# Confluent Data Balancer Settings #############################
# The Confluent Data Balancer is used to measure the load across the Kafka cluster and move data
# around as necessary. Comment out this line to disable the Data Balancer.
confluent.balancer.enable=true
# By default, the Data Balancer will only move data when an empty broker (one with no partitions on it)
# is added to the cluster or a broker failure is detected. Comment out this line to allow the Data
# Balancer to balance load across the cluster whenever an imbalance is detected.
#confluent.balancer.heal.uneven.load.trigger=ANY_UNEVEN_LOAD
# The default time to declare a broker permanently failed is 1 hour (3600000 ms).
# Uncomment this line to turn off broker failure detection, or adjust the threshold
# to change the duration before a broker is declared failed.
#confluent.balancer.heal.broker.failure.threshold.ms=-1
# Edit and uncomment the following line to limit the network bandwidth used by data balancing operations.
# This value is in bytes/sec/broker. The default is 10MB/sec.
#confluent.balancer.throttle.bytes.per.second=10485760
# Capacity Limits -- when set to positive values, the Data Balancer will attempt to keep
# resource usage per-broker below these limits.
# Edit and uncomment this line to limit the maximum number of replicas per broker. Default is unlimited.
#confluent.balancer.max.replicas=10000
# Edit and uncomment this line to limit what fraction of the log disk (0-1.0) is used before rebalancing.
# The default (below) is 85% of the log disk.
#confluent.balancer.disk.max.load=0.85
# Edit and uncomment these lines to define a maximum network capacity per broker, in bytes per
# second. The Data Balancer will attempt to ensure that brokers are using less than this amount
# of network bandwidth when rebalancing.
# Here, 10MB/s. The default is unlimited capacity.
#confluent.balancer.network.in.max.bytes.per.second=10485760
#confluent.balancer.network.out.max.bytes.per.second=10485760
# Edit and uncomment this line to identify specific topics that should not be moved by the data balancer.
# Removal operations always move topics regardless of this setting.
#confluent.balancer.exclude.topic.names=
# Edit and uncomment this line to identify topic prefixes that should not be moved by the data balancer.
# (For example, a "confluent.balancer" prefix will match all of "confluent.balancer.a", "confluent.balancer.b",
# "confluent.balancer.c", and so on.)
# Removal operations always move topics regardless of this setting.
#confluent.balancer.exclude.topic.prefixes=
# The replication factor for the topics the Data Balancer uses to store internal state.
# For anything other than development testing, a value greater than 1 is recommended to ensure availability.
# The default value is 3.
confluent.balancer.topic.replication.factor=1
################################## Confluent Telemetry Settings ##################################
# To start using Telemetry, first generate a Confluent Cloud API key/secret. This can be done with
# instructions at https://docs.confluent.io/current/cloud/using/api-keys.html. Note that you should
# be using the '--resource cloud' flag.
#
# After generating an API key/secret, to enable Telemetry uncomment the lines below and paste
# in your API key/secret.
#
#confluent.telemetry.enabled=true
#confluent.telemetry.api.key=<CLOUD_API_KEY>
#confluent.telemetry.api.secret=<CCLOUD_API_SECRET>
############ SSL #################
ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
ssl.truststore.password=test123
ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
# confluent.metrics.reporter.bootstrap.servers=localhost:9093
# confluent.metrics.reporter.security.protocol=SSL
# confluent.metrics.reporter.ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
# confluent.metrics.reporter.ssl.truststore.password=test123
# confluent.metrics.reporter.ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
# confluent.metrics.reporter.ssl.keystore.password=test123
# confluent.metrics.reporter.ssl.key.password=test123
client-ssl.properties:
bootstrap.servers=localhost:9093
security.protocol=SSL
ssl.truststore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks
ssl.truststore.password=test123
ssl.keystore.location=/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/keystore/kafka.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
Commands to start the Console Producer/Consumer:
$CONFLUENT_HOME/bin/kafka-console-producer --broker-list localhost:9093 --topic karantest --producer.config $CONFLUENT_HOME/props/client-ssl.properties
$CONFLUENT_HOME/bin/kafka-console-consumer --bootstrap-server localhost:9093 --topic karantest --consumer.config $CONFLUENT_HOME/props/client-ssl.properties --from-beginning
Any ideas on how to resolve this?
Update:
This is the error when I try to debug (using export KAFKA_OPTS=-Djavax.net.debug=all):
javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
javax.net.ssl|DEBUG|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.107 PST|SSLExtensions.java:173|Ignore unavailable extension: status_request
javax.net.ssl|ERROR|0E|kafka-producer-network-thread | console-producer|2021-11-10 14:04:26.108 PST|TransportContext.java:341|Fatal (CERTIFICATE_UNKNOWN): No name matching localhost found (
"throwable" : {
java.security.cert.CertificateException: No name matching localhost found
at java.base/sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:234)
at java.base/sun.security.util.HostnameChecker.match(HostnameChecker.java:103)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:429)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:283)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:141)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1335)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1232)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1175)
at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1074)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1061)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:1008)
at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:509)
at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:601)
at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:447)
at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:332)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:229)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:563)
at org.apache.kafka.common.network.Selector.poll(Selector.java:499)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:639)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
at java.base/java.lang.Thread.run(Thread.java:829)}
Adding the following in client-ssl.properties resolved the issue:
ssl.endpoint.identification.algorithm=
An empty value disables hostname verification; it is needed here because the certificate does not match the hostname of the machine you are using to run the consumer. That seems to be the recommended approach in this case.
Related thread:
Kafka java consumer SSL handshake Error : java.security.cert.CertificateException: No subject alternative names present
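The same override can also be applied from application code. Below is a hedged Java producer sketch (a minimal example, not taken from the question; the truststore path and password are the ones from client-ssl.properties above, and since the broker sets ssl.client.auth=none the keystore entries are omitted):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SslProducerCheck {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9093");
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location",
                "/Users/karanalang/Documents/Technology/confluent-6.2.1/ssl_certs/truststore/kafka.truststore.jks");
        props.put("ssl.truststore.password", "test123");
        // Empty value disables hostname verification, mirroring the console-client fix above
        props.put("ssl.endpoint.identification.algorithm", "");

        KafkaProducer<String, String> producer =
                new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
        producer.send(new ProducerRecord<>("karantest", "key", "hello over SSL"));
        producer.flush();
        producer.close();
    }
}

The cleaner long-term fix is to re-issue the broker certificate with a SAN that matches the hostname clients connect to, so hostname verification can stay enabled.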
Try to set the identification algorithm for the producer and consumer as well:
ssl.endpoint.identification.algorithm=
producer.ssl.endpoint.identification.algorithm=
consumer.ssl.endpoint.identification.algorithm=
Check whether you have connection problems. You can test access with:
openssl s_client -debug -connect servername:port -tls1_2
The answer must be: "Verify return code: 0 (ok)"
Otherwise you have no access.
Can anyone help me set up streaming via Telegraf into InfluxDB Cloud? I am using this tutorial; a Python script runs on my local machine and pushes notifications into RabbitMQ. Telegraf is subscribed to RabbitMQ with this config.
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will send metrics to outputs in batches of at most
## metric_batch_size metrics.
## This controls the size of writes that Telegraf sends to output plugins.
metric_batch_size = 1000
## For failed writes, telegraf will cache metric_buffer_limit metrics for each
## output, and will flush this buffer on a successful write. Oldest metrics
## are dropped first when this buffer fills.
## This buffer only fills when writes fail to output plugin(s).
metric_buffer_limit = 10000
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. Maximum flush_interval will be
## flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## By default or when set to "0s", precision will be set to the same
## timestamp order as the collection interval, with the maximum being 1s.
## ie, when interval = "10s", precision will be "1s"
## when interval = "250ms", precision will be "1ms"
## Precision will NOT be used for service inputs. It is up to each individual
## service input to set the timestamp at the appropriate precision.
## Valid time units are "ns", "us" (or "µs"), "ms", "s".
precision = ""
## Logging configuration:
## Run telegraf with debug log messages.
debug = true
## Run telegraf in quiet mode (error log messages only).
quiet = false
## Specify the log file name. The empty string means to log to stderr.
logfile = ""
## Override default hostname, if empty use os.Hostname()
hostname = ""
## If set to true, do no set the "host" tag in the telegraf agent.
omit_hostname = false
[[outputs.influxdb_v2]]
## The URLs of the InfluxDB cluster nodes.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
## urls exp: http://127.0.0.1:9999
urls = ["https://eu-central-1-1.aws.cloud2.influxdata.com"]
## Token for authentication.
token = "$INFLUX_TOKEN"
## Organization is the name of the organization you wish to write to; must exist.
organization = "some#gmail.com"
## Destination bucket to write into.
bucket = "two"
[[inputs.cpu]]
## Whether to report per-cpu stats or not
percpu = true
## Whether to report total system cpu stats or not
totalcpu = true
## If true, collect raw CPU time metrics.
collect_cpu_time = false
## If true, compute and report the sum of all non-idle CPU states.
report_active = false
[[inputs.disk]]
## By default stats will be gathered for all mount points.
## Set mount_points will restrict the stats to only the specified mount points.
# mount_points = ["/"]
## Ignore mount points by filesystem type.
ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
# # Reads metrics from RabbitMQ servers via the Management Plugin
[[inputs.rabbitmq]]
# ## Management Plugin url. (default: http://localhost:15672)
url = "http://localhost:15672"
# ## Tag added to rabbitmq_overview series; deprecated: use tags
# # name = "rmq-server-1"
# ## Credentials
username = "guest"
password = "guest"
#
# ## Optional TLS Config
# # tls_ca = "/etc/telegraf/ca.pem"
# # tls_cert = "/etc/telegraf/cert.pem"
# # tls_key = "/etc/telegraf/key.pem"
# ## Use TLS but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Optional request timeouts
# ##
# ## ResponseHeaderTimeout, if non-zero, specifies the amount of time to wait
# ## for a server's response headers after fully writing the request.
header_timeout = "3s"
# ##
# ## client_timeout specifies a time limit for requests made by this client.
# ## Includes connection time, any redirects, and reading the response body.
client_timeout = "4s"
#
# ## A list of nodes to gather as the rabbitmq_node measurement. If not
# ## specified, metrics for all nodes are gathered.
# # nodes = ["rabbit#node1", "rabbit#node2"]
#
# ## A list of queues to gather as the rabbitmq_queue measurement. If not
# ## specified, metrics for all queues are gathered.
# # queues = ["telegraf"]
#
# ## A list of exchanges to gather as the rabbitmq_exchange measurement. If no
[[inputs.mqtt_consumer]]
name_prefix = "influx"
servers = ["tcp://rabbitmq:1883"]
qos = 0
connection_timeout = "30s"
topics = [
"crypto/btc",
# "crypto/eth",
]
persistent_session = false
client_id = ""
data_format = "json"
json_string_fields
Logs show that data is writing into influxdb cloud
2020-02-25T16:01:53Z I! Starting Telegraf 1.13.3
2020-02-25T16:01:53Z I! Loaded inputs: mqtt_consumer disk diskio net system rabbitmq cpu mem processes swap
2020-02-25T16:01:53Z I! Loaded aggregators:
2020-02-25T16:01:53Z I! Loaded processors:
2020-02-25T16:01:53Z I! Loaded outputs: influxdb_v2
2020-02-25T16:01:53Z I! Tags enabled: host=dos4dev
2020-02-25T16:01:53Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"dos4dev", Flush Interval:10s
2020-02-25T16:01:53Z D! [agent] Initializing plugins
2020-02-25T16:01:53Z D! [agent] Connecting outputs
2020-02-25T16:01:53Z D! [agent] Attempting connection to [outputs.influxdb_v2]
2020-02-25T16:01:53Z D! [agent] Successfully connected to outputs.influxdb_v2
2020-02-25T16:01:53Z D! [agent] Starting service inputs
2020-02-25T16:02:00Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:10Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:10Z D! [outputs.influxdb_v2] Wrote batch of 78 metrics in 595.462779ms
2020-02-25T16:02:10Z D! [outputs.influxdb_v2] Buffer fullness: 83 / 10000 metrics
2020-02-25T16:02:20Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
2020-02-25T16:02:20Z D! [outputs.influxdb_v2] Wrote batch of 83 metrics in 344.265787ms
2020-02-25T16:02:20Z D! [outputs.influxdb_v2] Buffer fullness: 83 / 10000 metrics
2020-02-25T16:02:30Z D! [inputs.mqtt_consumer] Connecting [tcp://rabbitmq:1883]
But I can't find the data in InfluxDB Cloud.
Based on the log messages from Telegraf, it looks like the data is being written. Have you tried following the docs for exploring your data? https://v2.docs.influxdata.com/v2.0/visualize-data/explore-metrics/
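If you prefer to check from code rather than the Data Explorer, here is a hedged Java sketch using the influxdb-client-java library (an assumption on my side, it is not part of your setup; URL, organization and bucket are copied from the [[outputs.influxdb_v2]] section above). If this prints rows, the data is in the bucket and it is only a matter of finding it in the UI:

import com.influxdb.client.InfluxDBClient;
import com.influxdb.client.InfluxDBClientFactory;
import com.influxdb.query.FluxRecord;
import com.influxdb.query.FluxTable;

import java.util.List;

public class InfluxCloudCheck {

    public static void main(String[] args) {
        // Same token, org and bucket that the Telegraf output plugin writes with
        String token = System.getenv("INFLUX_TOKEN");
        InfluxDBClient client = InfluxDBClientFactory.create(
                "https://eu-central-1-1.aws.cloud2.influxdata.com",
                token.toCharArray(), "some@gmail.com", "two");

        String flux = "from(bucket: \"two\") |> range(start: -1h) |> limit(n: 10)";
        List<FluxTable> tables = client.getQueryApi().query(flux);
        for (FluxTable table : tables) {
            for (FluxRecord record : table.getRecords()) {
                System.out.println(record.getMeasurement() + " " + record.getField() + " = " + record.getValue());
            }
        }

        client.close();
    }
}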
I am trying to set up the redis-ha Helm chart on my local Kubernetes (Docker for Windows).
The Helm values file I am using is:
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
image:
repository: redis
tag: 5.0.3-alpine
pullPolicy: IfNotPresent
## replicas number for each component
replicas: 3
## Custom labels for the redis pod
labels: {}
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: false
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the redis-ha.fullname template
# name:
## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
create: false
## Redis specific configuration options
redis:
  port: 6379
  masterGroupName: mymaster
  config:
    ## Additional redis conf options can be added below
    ## For all available options see http://download.redis.io/redis-stable/redis.conf
    min-slaves-to-write: 1
    min-slaves-max-lag: 5 # Value in seconds
    maxmemory: "0" # Max memory to use for each redis instance. Default is unlimited.
    maxmemory-policy: "volatile-lru" # Max memory policy to use for each redis instance. Default is volatile-lru.
    # Determines if scheduled RDB backups are created. Default is false.
    # Please note that local (on-disk) RDBs will still be created when re-syncing with a new slave. The only way to prevent this is to enable diskless replication.
    save: "900 1"
    # When enabled, directly sends the RDB over the wire to slaves, without using the disk as intermediate storage. Default is false.
    repl-diskless-sync: "yes"
    rdbcompression: "yes"
    rdbchecksum: "yes"
  ## Custom redis.conf files used to override default settings. If this file is
  ## specified then the redis.config above will be ignored.
  # customConfig: |-
  #   Define configuration here
  resources:
    requests:
      memory: 200Mi
      cpu: 100m
    limits:
      memory: 700Mi
      cpu: 250m
## Sentinel specific configuration options
sentinel:
port: 26379
quorum: 2
config:
## Additional sentinel conf options can be added below. Only options that
## are expressed in the format simialar to 'sentinel xxx mymaster xxx' will
## be properly templated.
## For available options see http://download.redis.io/redis-stable/sentinel.conf
down-after-milliseconds: 10000
## Failover timeout value in milliseconds
failover-timeout: 180000
parallel-syncs: 5
## Custom sentinel.conf files used to override default settings. If this file is
## specified then the sentinel.config above will be ignored.
# customConfig: |-
# Define configuration here
resources:
requests:
memory: 200Mi
cpu: 100m
limits:
memory: 200Mi
cpu: 250m
securityContext:
runAsUser: 1000
fsGroup: 1000
runAsNonRoot: true
## Node labels, affinity, and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Prometheus exporter specific configuration options
exporter:
enabled: false
image: oliver006/redis_exporter
tag: v0.31.0
pullPolicy: IfNotPresent
# prometheus port & scrape path
port: 9121
scrapePath: /metrics
# cpu/memory resource limits/requests
resources: {}
# Additional args for redis exporter
extraArgs: {}
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 1
## Configures redis with AUTH (requirepass & masterauth conf params)
auth: false
# redisPassword:
## Use existing secret containing "auth" key (ignores redisPassword)
# existingSecret:
persistentVolume:
enabled: true
## redis-ha data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
accessModes:
- ReadWriteOnce
size: 1Gi
annotations: {}
init:
resources: {}
# To use a hostPath for data, set persistentVolume.enabled to false
# and define hostPath.path.
# Warning: this might overwrite existing folders on the host system!
hostPath:
## path is evaluated as template so placeholders are replaced
# path: "/data/{{ .Release.Name }}"
# if chown is true, an init-container with root permissions is launched to
# change the owner of the hostPath folder to the user defined in the
# security context
chown: true
redis-ha is getting deployed correctly and when I do kubectl get all,
NAME READY STATUS RESTARTS AGE
pod/rc-redis-ha-server-0 2/2 Running 0 1h
pod/rc-redis-ha-server-1 2/2 Running 0 1h
pod/rc-redis-ha-server-2 2/2 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23d
service/rc-redis-ha ClusterIP None <none> 6379/TCP,26379/TCP 1h
service/rc-redis-ha-announce-0 ClusterIP 10.105.187.154 <none> 6379/TCP,26379/TCP 1h
service/rc-redis-ha-announce-1 ClusterIP 10.107.36.58 <none> 6379/TCP,26379/TCP 1h
service/rc-redis-ha-announce-2 ClusterIP 10.98.38.214 <none> 6379/TCP,26379/TCP 1h
NAME DESIRED CURRENT AGE
statefulset.apps/rc-redis-ha-server 3 3 1h
I try to access redis-ha using a Java application, which uses the Lettuce driver to connect to Redis. Sample Java code to access Redis:
package io.c12.bala.lettuce;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.logging.Logger;

public class RedisClusterConnect {

    private static final Logger logger = Logger.getLogger(RedisClusterConnect.class.getName());

    public static void main(String[] args) {
        logger.info("Starting test");

        // Syntax: redis-sentinel://[password@]host[:port][,host2[:port2]][/databaseNumber]#sentinelMasterId
        RedisClient redisClient = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        StatefulRedisConnection<String, String> connection = redisClient.connect();
        RedisCommands<String, String> command = connection.sync();

        command.set("Hello", "World");
        logger.info("Ran set command successfully");
        logger.info("Value from Redis - " + command.get("Hello"));

        connection.close();
        redisClient.shutdown();
    }
}
I packaged the application as a runnable JAR, created a container, and pushed it to the same Kubernetes cluster where Redis is running. The application now throws an error:
Exception in thread "main" io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:122)
at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:69)
at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
at com.sun.proxy.$Proxy0.set(Unknown Source)
at io.c12.bala.lettuce.RedisClusterConnect.main(RedisClusterConnect.java:22)
Caused by: io.lettuce.core.RedisCommandExecutionException: NOREPLICAS Not enough good replicas to write.
at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:108)
at io.lettuce.core.protocol.AsyncCommand.completeResult(AsyncCommand.java:120)
at io.lettuce.core.protocol.AsyncCommand.complete(AsyncCommand.java:111)
at io.lettuce.core.protocol.CommandHandler.complete(CommandHandler.java:646)
at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:604)
at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:556)
I tried with the Jedis driver too, and with a Spring Boot application, and I get the same error from the redis-ha cluster.
** UPDATE **
When I run the info command inside redis-cli, I get:
connected_slaves:2
min_slaves_good_slaves:0
It seems the slaves are not behaving properly. When I switch to min-slaves-to-write: 0, I am able to read and write to the Redis cluster.
Any help on this is appreciated.
It seems that you have to edit the redis-ha-configmap ConfigMap and set min-slaves-to-write to 0.
After deleting all Redis pods (to apply the change), it works like a charm.
So:
helm install stable/redis-ha
kubectl edit cm redis-ha-configmap # change min-slaves-to-write from 1 to 0
kubectl delete pod redis-ha-0
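To confirm the new value actually reached the master (or to try it out before touching the chart), here is a hedged sketch reusing the Lettuce setup from the question; CONFIG GET/SET are standard Redis commands, and the min-slaves-to-write name used by the chart is still accepted by Redis 5. A CONFIG SET done this way is in-memory only, so the ConfigMap edit above remains the persistent fix:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.Map;

public class MinSlavesCheck {

    public static void main(String[] args) {
        RedisClient redisClient = RedisClient.create("redis-sentinel://rc-redis-ha:26379/0#mymaster");
        StatefulRedisConnection<String, String> connection = redisClient.connect();
        RedisCommands<String, String> commands = connection.sync();

        // NOREPLICAS is returned when this value is higher than the number of "good" replicas
        Map<String, String> cfg = commands.configGet("min-slaves-to-write");
        System.out.println("min-slaves-to-write = " + cfg);

        // Temporary, in-memory override for testing only
        commands.configSet("min-slaves-to-write", "0");
        System.out.println("SET works now? " + commands.set("Hello", "World"));

        connection.close();
        redisClient.shutdown();
    }
}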
If you are deploying this Helm chart locally on your computer, you only have one node available. If you install the Helm chart with --set hardAntiAffinity=false, it will put the required replica pods all on the same node, so they will start up correctly and you will not get that error. The hardAntiAffinity value has a documented default of true:
Whether the Redis server pods should be forced to run on separate nodes.
When I deployed the Helm chart with the same values to a Kubernetes cluster running on AWS, it worked fine.
Seems like an issue with Kubernetes on Docker for Windows.
We are monitoring several servers with Monit, version 5.25.1.
Some are dedicated Apache servers. The monitoring is OK.
But the Monit log (/var/log/monit) looks like this:
[CET Mar 18 03:12:03] info : Starting Monit 5.25.1 daemon with http interface at [0.0.0.0]:3353
[CET Mar 18 03:12:03] info : Monit start delay set to 180s
[CET Mar 18 03:15:03] info : 'xxxxx.localhost' Monit 5.25.1 started
[CET Mar 18 03:15:03] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:16:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:17:08] error : 'apache' error -- unknown resource ID: [5]
[CET Mar 18 03:18:08] error : 'apache' error -- unknown resource ID: [5]
The configuration file /etc/monit.conf is like this:
###############################################################################
## Monit control file
###############################################################################
###############################################################################
## Global section
###############################################################################
##
## Start Monit in the background (run as a daemon):
# check services at 2-minute intervals
# with start delay 240 # optional: delay the first check by 4-minutes (by
# # default Monit check immediately after Monit start)
set daemon 60
with start delay 180
### Set the location of the Monit id file which stores the unique id for the
### Monit instance. The id is generated and stored on first Monit start. By
### default the file is placed in $HOME/.monit.id.
#
set idfile /var/.monit.id
## Set the list of mail servers for alert delivery. Multiple servers may be
## specified using a comma separator. By default Monit uses port 25 - it is
## possible to override this with the PORT option.
#
# set mailserver mail.bar.baz, # primary mailserver
# backup.bar.baz port 10025, # backup mailserver on port 10025
# localhost # fallback relay
set mailserver localhost
## By default Monit will drop alert events if no mail servers are available.
## If you want to keep the alerts for later delivery retry, you can use the
## EVENTQUEUE statement. The base directory where undelivered alerts will be
## stored is specified by the BASEDIR option. You can limit the maximal queue
## size using the SLOTS option (if omitted, the queue is limited by space
## available in the back end filesystem).
#
set eventqueue
basedir /var/monit # set the base directory where events will be stored
slots 100 # optionally limit the queue size
## Send status and events to M/Monit (for more informations about M/Monit
## see http://mmonit.com/).
#
# set mmonit http://monit:monit@192.168.1.10:8080/collector
#
#
## Monit by default uses the following alert mail format:
##
##
## You can override this message format or parts of it, such as subject
## or sender using the MAIL-FORMAT statement. Macros such as $DATE, etc.
## are expanded at runtime. For example, to override the sender, use:
#
# set mail-format { from: monit@foo.bar }
#
#
## You can set alert recipients whom will receive alerts if/when a
## service defined in this file has errors. Alerts may be restricted on
## events by using a filter as in the second example below.
#
set alert fake@mail.com not on { instance }
# receive all alerts
# set alert manager@foo.bar only on { timeout } # receive just service-
# # timeout alert
#
mail-format {
from: xxxxxxx@monit.localhost
subject: $SERVICE => $EVENT
message:
DESCRIPTION : $DESCRIPTION
ACTION : $ACTION
DATE : $DATE
HOST : $HOST
Sorry for the spam.
Monit
}
## Monit has an embedded web server which can be used to view status of
## services monitored and manage services from a web interface. See the
## Monit Wiki if you want to enable SSL for the web server.
#
set httpd port 3353 and
use address 0.0.0.0
allow yyyyy:zzzz
###############################################################################
## Services
###############################################################################
##
## Check general system resources such as load average, cpu and memory
## usage. Each test specifies a resource, conditions and the action to be
## performed should a test fail.
#
check system xxxxxx.localhost
if loadavg (1min) > 8 for 5 cycles then alert
if loadavg (5min) > 4 for 5 cycles then alert
if memory usage > 75% for 5 cycles then alert
if cpu usage (user) > 70% for 5 cycles then alert
if cpu usage (system) > 50% for 5 cycles then alert
if cpu usage (wait) > 50% for 5 cycles then alert
check process apache with pidfile /var/run/httpd/httpd.pid
group www
start program = "/etc/init.d/httpd start" with timeout 60 seconds
stop program = "/etc/init.d/httpd stop"
if failed host localhost port 80 then restart
if cpu > 60% for 2 cycles then alert
if cpu > 80% for 5 cycles then restart
if loadavg(5min) greater than 10 for 8 cycles then restart
if 3 restarts within 5 cycles then timeout
###############################################################################
## Includes
###############################################################################
##
## It is possible to include additional configuration parts from other files or
## directories.
#
# include /etc/monit.d/*
#
#
# Include all files from /etc/monit.d/
include /etc/monit.d/*
In the Monit UI everything is OK and the monitoring is 100% useful. We can stop and restart the service as we want.
So I don't understand the message 'error : 'apache' error -- unknown resource ID: [5]' that we find in the Monit log.
Does anyone have an idea about it?
Thanks for your help.
I had the same problem.
M/Monit said that loadavg is for "check system" only; it used to work for apache but not anymore:
"The loadavg statement can be used in "check system" context only (load average is system property, not process'). Please remove the following statement and reload monit"
So disable this line by adding # at the start of:
# if loadavg(5min) greater than 10 for 8 cycles then restart
Then restart Monit:
service monit restart
You will no longer receive the Apache error.