Debezium Server with Redis

I'm new to Debezium and I want my Debezium Server to stream change events from Postgres to a Redis stream.
In the log it seems to read the changes from the database, but nothing is published to Redis.
My application.properties config:
debezium.sink.type=redis
debezium.sink.redis.address=0.0.0.0:6379
debezium.sink.redis.batch.size=10
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage.file.filename=data/offsets.dat
debezium.source.offset.flush.interval.ms=0
debezium.source.database.hostname=0.0.0.0
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=postgres
debezium.source.database.dbname=postgres
debezium.source.database.server.name=postgres
debezium.source.table.include.list=inventory
quarkus.log.console.json=true
Debezium server log:
Starting PostgresConnectorTask with configuration:
connector.class = io.debezium.connector.postgresql.PostgresConnector
debezium.sink.redis.batch.size = 10
database.user = postgres
database.dbname = postgres
debezium.sink.type = redis
offset.storage = org.apache.kafka.connect.storage.FileOffsetBackingStore
debezium.sink.redis.address = 0.0.0.0:6379
database.server.name = postgres
offset.flush.timeout.ms = 5000
database.port = 5432
offset.flush.interval.ms = 0
internal.key.converter = org.apache.kafka.connect.json.JsonConverter
offset.storage.file.filename = data/offsets.dat
database.hostname = 0.0.0.0
database.password = ********
name = redis
internal.value.converter = org.apache.kafka.connect.json.JsonConverter
table.include.list = inventory
value.converter = org.apache.kafka.connect.json.JsonConverter
key.converter = org.apache.kafka.connect.json.JsonConverter
2022-08-15 12:37:06,909 INFO [io.quarkus] (main) debezium-server-dist 1.9.5.Final on 0.0.0:8080
So my question: do I have to specify the name of the Redis database, and is my configuration correct?
Thanks!
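A quick way to check whether the sink writes anything to Redis at all is redis-cli. This is only a sketch: it assumes Debezium's default topic naming of <database.server.name>.<schema>.<table> for the stream keys (e.g. postgres.inventory.customers for a hypothetical customers table in the inventory schema):
# list the stream keys the Redis sink has created so far
redis-cli -h 127.0.0.1 -p 6379 --scan --pattern 'postgres.*'
# inspect one stream, if it exists
redis-cli -h 127.0.0.1 -p 6379 XLEN postgres.inventory.customers
redis-cli -h 127.0.0.1 -p 6379 XRANGE postgres.inventory.customers - + COUNT 5
If no keys appear, note that table.include.list normally expects regular expressions matching schema-qualified names (e.g. inventory.customers) rather than a bare schema name, so the configuration above may not capture any tables.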

Related

telegraf disk-input does not write to output in phusion/baseimage

I just set up telegraf and InfluxDB with some other plugins.
But the output of [[inputs.disk]] is not sent to the InfluxDB database, although the telegraf CLI prints the series:
root@99a3dda91f0e:/# telegraf --config /etc/telegraf/telegraf.conf --test
* Plugin: inputs.disk, Collection 1
> disk,path=/,device=none,fstype=aufs,host=99a3dda91f0e,dockerhost=0zizhqemxr3fmhr949qqg94ly free=92858503168i,used=5304786944i,used_percent=5.404043546164225,inodes_total=6422528i,inodes_free=6192593i,inodes_used=229935i,total=103441399808i 1504273867000000000
> disk,path=/usr/share/zoneinfo/Etc/UTC,device=sda1,fstype=ext4,host=99a3dda91f0e,dockerhost=0zizhqemxr3fmhr949qqg94ly used=5304786944i,used_percent=5.404043546164225,inodes_total=6422528i,inodes_free=6192593i,inodes_used=229935i,total=103441399808i,free=92858503168i 1504273867000000000
> disk,path=/etc/resolv.conf,device=sda1,fstype=ext4,host=99a3dda91f0e,dockerhost=0zizhqemxr3fmhr949qqg94ly inodes_free=253014i,inodes_used=729i,total=207867904i,free=191041536i,used=16826368i,used_percent=8.094740783069618,inodes_total=253743i 1504273867000000000
> disk,path=/etc/hostname,device=sda1,fstype=ext4,host=99a3dda91f0e,dockerhost=0zizhqemxr3fmhr949qqg94ly total=103441399808i,free=92858503168i,used=5304786944i,used_percent=5.404043546164225,inodes_total=6422528i,inodes_free=6192593i,inodes_used=229935i 1504273867000000000
> disk,dockerhost=0zizhqemxr3fmhr949qqg94ly,path=/etc/hosts,device=sda1,fstype=ext4,host=99a3dda91f0e total=103441399808i,free=92858503168i,used=5304786944i,used_percent=5.404043546164225,inodes_total=6422528i,inodes_free=6192593i,inodes_used=229935i 1504273867000000000
* Plugin: inputs.kernel, Collection 1
> kernel,host=99a3dda91f0e,dockerhost=0zizhqemxr3fmhr949qqg94ly interrupts=38110293i,context_switches=66702050i,boot_time=1504190750i,processes_forked=227872i 1504273867000000000
Within influx:
> use monitoring
Using database monitoring
> show measurements
name: measurements
name
----
kernel
>
The telegraf config:
[global_tags]
host = "$HOSTNAME"
dockerhost = "$DOCKERHOSTNAME"
# Configuration for telegraf agent
[agent]
interval = "10s"
round_interval = true
metric_batch_size = 1000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = ""
debug = false
quiet = true
logfile = ""
hostname = ""
omit_hostname = false
[[outputs.influxdb]]
urls = ["http://influxdb:8086"] # required
database = "$INFLUX_DATABASE"
retention_policy = ""
write_consistency = "any"
timeout = "5s"
[[inputs.disk]]
## Setting mountpoints will restrict the stats to the specified mountpoints.
# mount_points = ["/"]
## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
## present on /run, /var/run, /dev/shm or /dev).
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.kernel]]
Telegraf v1.3.5 (git: release-1.3 7192e68b2423997177692834f53cdf171aee1a88)
InfluxDB v1.3.2 (git: 1.3 742b9cb3d74ff1be4aff45d69ee7c9ba66c02565)
Edit: of course, the database variable is set:
echo $INFLUX_DATABASE
monitoring
If I add other inputs again, like [[inputs.diskio]], they appear in the database immediately.
It seems like there is an issue gathering the disk stats when running telegraf as a runsv script in phusion/baseimage.
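One way to narrow this down is to stop the runsv service, run the same binary and config in the foreground inside the container, and then query InfluxDB directly. A sketch, reusing the hostnames and database from the config above:
# run telegraf in the foreground with the identical config
telegraf --config /etc/telegraf/telegraf.conf
# after a flush interval or two, check whether the disk measurement arrived
influx -host influxdb -database monitoring -execute 'SHOW MEASUREMENTS'
influx -host influxdb -database monitoring -execute 'SELECT * FROM disk ORDER BY time DESC LIMIT 5'
If the disk series shows up when telegraf runs in the foreground but not under runsv, the problem is in how the service is started (environment, user, or mounts) rather than in the plugin configuration.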

kdb5_util dump gives Server error

I have been trying to dump my Kerberos database (ldap backend) using kdb5_util dump (filename), but I get:
kdb5_util load_dump version 6
kdb5_util: error performing Kerberos version 5 release 1.8 dump (Server error)
policy default 0 0 1 1 1 0 0 0 0
The Kerberos KDC and kadmin logs show nothing; ldap.log gives:
May 31 12:40:17 kdc slapd[28020]: connection_input: conn=1091 deferring operation: binding
Everything else works fine: creating, deleting, and authenticating principals is no problem. Just dumping the DB fails. As far as I understand, the backend should not have any influence on the dump.
Any ideas how I can debug or fix this? What am I missing?
/etc/krb5.conf
[libdefaults]
default_realm = REALM.EXAMPLE.COM
kdc_timesync = 1
ccache_type = 4
forwardable = true
proxiable = true
[realms]
REALM.EXAMPLE.COM = {
kdc = kdc.realm.example.com
admin_server = kdc.realm.example.com
kpasswd_server = kdc.realm.example.com
}
[domain_realm]
.realm.example.com = REALM.EXAMPLE.COM
/etc/krb5kdc/kdc.conf
[realms]
REALM.EXAMPLE.COM = {
default_domain = realm.example.com
database_module = ldapconf
acl_file = /etc/krb5kdc/kadm5.acl
key_stash_file = /etc/krb5kdc/.master
max_life = 10h 0m 0s
max_renewable_life = 7d 0h 0m 0s
master_key_type = aes256-cts
supported_enctypes = aes256-cts-hmac-sha1-96:normal
#aes128-cts-hmac-sha1-96:normal arcfour-hmac:normal
default_principal_flags = +preauth
pkinit_identity = FILE:/etc/krb5kdc/kdc-cert.pem,/etc/krb5kdc/.kdc-key.pem
pkinit_anchors = FILE:/etc/krb5kdc/ca-cert.pem
dict_file = /root/bad_passwords.dict
}
[dbmodules]
ldapconf = {
db_library = kldap
ldap_kerberos_container_dn = "cn=kerberos,dc=realm,dc=example,dc=com"
ldap_kdc_dn = "cn=kerberos-kdc,dc=realm,dc=example,dc=com"
ldap_kadmind_dn = "cn=kerberos-admin,dc=realm,dc=example,dc=com"
ldap_servers = ldapi:///
ldap_service_password_file = /etc/krb5kdc/.service
}
[logging]
kdc = FILE:/var/log/kerberos/kdc.log
admin_server = FILE:/var/log/kerberos/kadmin.log
default = FILE:/var/log/kerberos/kerberos.log
I finally found the problem after debugging:
The LDAP backend has a hard size limit of 500 for search requests. With 501 users, that bit me in the backside!
Fix:
#
# remove sizelimit for ldap search
#
# apply with ldapmodify -Y EXTERNAL -H ldapi:/// -f sizelimit.ldif
#
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcLimits
olcLimits: dn.exact="cn=kerberos-admin,dc=realm,dc=example,dc=com" size=unlimited
Apply it, restart slapd, and dump happily away.
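To double-check that the new limit is actually in effect before re-running the dump, something like this should work (a sketch; /tmp/kdb.dump is just an example output file):
# verify the olcLimits attribute landed on the database entry
ldapsearch -Y EXTERNAL -H ldapi:/// -LLL -b 'olcDatabase={1}hdb,cn=config' olcLimits
# then retry the dump
kdb5_util dump /tmp/kdb.dump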

NACK/0x00000061/Invalid Scheduled Delivery Time error in Kannel

I have been trying to configure an SMS gateway service using Kannel and sqlbox. My system is successfully connected to the Airtel SMSC. But whenever I try to send an SMS (by inserting data into the send_sms table, of course), I get this weird response from the SMSC:
NACK/0x00000061/Invalid Scheduled Delivery Time
But I have not specified a scheduled delivery time anywhere.
Here is the log on the SMSC side,
and here is my Kannel configuration:
#CORE
group = core
admin-port = 13000
smsbox-port = 13001
admin-password = rasello
status-password = rasello
admin-allow-ip = "*.*.*.*"
wdp-interface-name = "*"
log-file = "/var/log/kannel/bearerbox.log"
#store-file = "/var/log/kannel/kannel.store"
log-level = 0
#box-deny-ip = "*.*.*.*"
box-allow-ip = "*.*.*.*"
dlr-storage=mysql
#SMSBOX SETUP
group = smsbox
bearerbox-host = localhost
sendsms-port = 13013
bearerbox-port = 13001
log-file = "/var/log/kannel/smsbox.log"
log-level = 0
# SEND-SMS USERS
group = sendsms-user
username = username
password = password
default-smsc = rasello
#mysql connection
group = mysql-connection
id = sqlbox-db
host = localhost
port = 3306
username = root
password = N3pal#312
database = kannel
max-connections = 10
# DLR SETUP
#mysql connection
group = mysql-connection
id = mydlr
host = localhost
username = root
password = N3pal#312
database = kannel
max-connections = 10
group = dlr-db
id = mydlr
table=dlr
field-smsc=smsc
field-timestamp=ts
field-destination=destination
field-source=source
field-service=service
field-url=url
field-mask=mask
field-status=status
field-boxc-id=boxc
# SMSC SMPP
group = smsc
smsc-id = rasello
smsc = smpp
host = ip
port = port
transceiver-mode = false
smsc-username = username
smsc-password = password
system-type = smpp
interface-version = 34
address-range = ""
#SMS SERVICE GET-URL
group = sms-service
keyword = default
send-sender = true
get-url = "http://localhost/receivesms?phone=%p&text=%a"
Please help me resolve this issue.
You have to contact your SMPP provider about this, because they are rejecting your SMS with this NACK.
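While waiting on the provider, bearerbox's HTTP admin interface is handy for watching the SMSC bind state and the sent/rejected counters as you test. A sketch using the admin-port and admin-password from the core group above:
# overall gateway status, including per-SMSC message counters
curl "http://localhost:13000/status?password=rasello"
# follow bearerbox's view of the SMPP session while a message is submitted
tail -f /var/log/kannel/bearerbox.log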

celery worker not publishing message to the rabbitmq?

I have a setup where celery_result_backend has been configured to 'amqp'. I can see my tasks getting executed by the worker in the logs, and a queue is created with the task id, but its status is expired. I am not getting the result (result = AsyncResult(taskid); result.get() hangs). I tried all the supported backends:
1) MySQL: it does not put data into the tables Celery created.
2) Redis: it does not put data into the db.
I have two CentOS systems.
1) On the first, I call the delay method to send the task to the proper RabbitMQ queue. The worker is listening to that queue, picks up the task from there and processes it (I can see the task in the queue and see it getting executed by the worker on machine 2), but the result is not being put into the backend. Here I do result.get(), and it hangs.
2) The second runs the worker that executes the task. It executes the task, but it seems it is not able to put the result anywhere.
Settings:
RABBITMQ_BROKER_HOST = '10.213.166.133'
RABBITMQ_BROKER_PORT = dqms_settings.RABBITMQ_BROKER_PORT
RABBITMQ_BROKER_VHOST = dqms_settings.RABBITMQ_BROKER_VHOST
RABBITMQ_BROKER_USERNAME = dqms_settings.RABBITMQ_BROKER_USERNAME
RABBITMQ_BROKER_PASSWORD = dqms_settings.RABBITMQ_BROKER_PASSWORD
BROKER_URL = 'amqp://%s:%s@%s:%s/%s' % (RABBITMQ_BROKER_USERNAME,
RABBITMQ_BROKER_PASSWORD,
RABBITMQ_BROKER_HOST,
RABBITMQ_BROKER_PORT,
RABBITMQ_BROKER_VHOST)
#CELERY_TASK_RESULT_EXPIRES = 18000
#CELERY_IGNORE_RESULT = True
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
#CELERY_RESULT_BACKEND = 'db+mysql://svcacct-dqms:s3cretP#ssw0rd@10.213.166.202:3306/dqms'
#CELERY_RESULT_BACKEND = 'amqp'
#CELERY_AMQP_TASK_RESULT_EXPIRES = 1000
#CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
CELERYD_PREFETCH_MULTIPLIER = dqms_settings.CELERYD_PREFETCH_MULTIPLIER
CELERY_DEFAULT_QUEUE = dqms_settings.CELERY_DEFAULT_QUEUE
CELERY_DEFAULT_EXCHANGE_TYPE = dqms_settings.CELERY_DEFAULT_EXCHANGE_TYPE
CELERY_DEFAULT_ROUTING_KEY = dqms_settings.CELERY_DEFAULT_ROUTING_KEY
CELERY_QUEUES = dqms_settings.CELERY_QUEUES
CELERY_ROUTES = dqms_settings.CELERY_ROUTES
CELERYD_HIJACK_ROOT_LOGGER = dqms_settings.CELERYD_HIJACK_ROOT_LOGGER
CELERY_ACKS_LATE = dqms_settings.CELERY_ACKS_LATE
CELERY_RESULT_BACKEND = 'redis://:s3cretP#ssw0rd@10.213.166.204:6379/5' #'djcelery.backends.database.DatabaseBackend'
#CELERY_REDIS_MAX_CONNECTIONS = 6
#CELERY_ALWAYS_EAGER = False
Can someone help me figure out why it is not putting the result in the queue?
This is an issue that has been happening quite commonly lately.
Setting CELERY_ALWAYS_EAGER to True will do the work;
however, this is not the best solution in a production scenario.
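Before falling back to CELERY_ALWAYS_EAGER (which runs tasks locally and synchronously), it is worth checking whether the Redis result backend ever receives anything. Celery's Redis backend stores results under keys prefixed with celery-task-meta-; a sketch against the backend configured above (host, db and password taken from CELERY_RESULT_BACKEND):
# look for stored task results in db 5 of the result backend
redis-cli -h 10.213.166.204 -p 6379 -a 's3cretP#ssw0rd' -n 5 --scan --pattern 'celery-task-meta-*'
If nothing shows up after a task has finished, a common cause is the worker loading a different settings module (and therefore a different CELERY_RESULT_BACKEND or CELERY_IGNORE_RESULT) than the client that calls result.get().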

Can apache flume hdfs sink accept dynamic path to write?

I am new to Apache Flume.
I am trying to see how I can receive a JSON (via an HTTP source), parse it, and store it to a dynamic path on HDFS according to its content.
For example:
If the JSON is:
[{
"field1" : "value1",
"field2" : "value2"
}]
then the hdfs path will be:
/some-default-root-path/value1/value2/some-value-name-file
Is there a Flume configuration that enables me to do that?
Here is my current configuration (it accepts a JSON via HTTP and stores it in a path based on the timestamp):
#flume.conf: http source, hdfs sink
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 9000
#a1.sources.r1.handler = org.apache.flume.http.JSONHandler
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /user/uri/events/%y-%m-%d/%H%M/%S
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Thanks!
The solution was in the Flume documentation for the HDFS sink: hdfs.path supports escape sequences such as %{header}, which are replaced with the value of the corresponding event header.
Here is the revised configuration:
#flume.conf: http source, hdfs sink
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 9000
#a1.sources.r1.handler = org.apache.flume.http.JSONHandler
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /user/uri/events/%{field1}
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
And the curl command:
curl -X POST -d '[{ "headers" : { "timestamp" : "434324343", "host" :"random_host.example.com", "field1" : "val1" }, "body" : "random_body" }]' localhost:9000
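With that configuration, the field1 value from the event headers is substituted into hdfs.path, so the example event above should end up under /user/uri/events/val1. A quick check (path taken from the config above):
# list the files Flume wrote for the val1 header value
hdfs dfs -ls /user/uri/events/val1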