I want to sink JSON data into Apache Phoenix with Apache Flume. I followed an online guide, http://kalyanbigdatatraining.blogspot.com/2016/10/how-to-stream-json-data-into-phoenix.html, but ran into the following error. How can I resolve it? Many thanks!
My environment:
hadoop-2.7.3
hbase-1.3.1
phoenix-4.12.0-HBase-1.3-bin
flume-1.7.0
In Flume, I added the Phoenix sink related jars under $FLUME_HOME/plugins.d/phoenix-sink/lib:
commons-io-2.4.jar
twill-api-0.8.0.jar
twill-discovery-api-0.8.0.jar
json-path-2.2.0.jar
twill-common-0.8.0.jar
twill-discovery-core-0.8.0.jar
phoenix-flume-4.12.0-HBase-1.3.jar
twill-core-0.8.0.jar
twill-zookeeper-0.8.0.jar
2017-11-11 14:49:54,786 (lifecycleSupervisor-1-1) [DEBUG - org.apache.phoenix.jdbc.PhoenixDriver$2.onRemoval(PhoenixDriver.java:159)] Expiring localhost:2181:/hbase because of EXPLICIT
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [INFO - org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.closeZooKeeperWatcher(ConnectionManager.java:1712)] Closing zookeeper sessionid=0x15fa8952cea00a6
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ZooKeeper.close(ZooKeeper.java:673)] Closing session: 0x15fa8952cea00a6
2017-11-11 14:49:54,787 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ClientCnxn.close(ClientCnxn.java:1306)] Closing client for session: 0x15fa8952cea00a6
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-SendThread(localhost:2181)) [DEBUG - org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:818)] Reading reply sessionid:0x15fa8952cea00a6, packet:: clientPath:null serverPath:null finished:false header:: 3,-11 replyHeader:: 3,2620,0 request:: null response:: null
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1) [DEBUG - org.apache.zookeeper.ClientCnxn.disconnect(ClientCnxn.java:1290)] Disconnecting client for session: 0x15fa8952cea00a6
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-SendThread(localhost:2181)) [DEBUG - org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1086)] An exception was thrown while closing send thread for session 0x15fa8952cea00a6 : Unable to read additional data from server sessionid 0x15fa8952cea00a6, likely server has closed socket
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1) [INFO - org.apache.zookeeper.ZooKeeper.close(ZooKeeper.java:684)] Session: 0x15fa8952cea00a6 closed
2017-11-11 14:49:54,789 (lifecycleSupervisor-1-1-EventThread) [INFO - org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:512)] EventThread shut down
2017-11-11 14:49:54,790 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)] Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#2d2052a0 counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoSuchMethodError: org.apache.twill.zookeeper.ZKClientService.startAndWait()Lcom/google/common/util/concurrent/Service$State;
at org.apache.phoenix.transaction.TephraTransactionContext.setTransactionClient(TephraTransactionContext.java:147)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.initTxServiceClient(ConnectionQueryServicesImpl.java:401)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:415)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:257)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2360)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2360)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.phoenix.flume.serializer.BaseEventSerializer.initialize(BaseEventSerializer.java:140)
at org.apache.phoenix.flume.sink.PhoenixSink.start(PhoenixSink.java:119)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:45)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:249)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-11-11 14:49:54,792 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:149)] Component type: SINK, name: Phoenix Sink__1 stopped
2017-11-11 14:49:54,792 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:155)] Shutdown Metric for type: SINK, name: Phoenix Sink__1. sink.start.time == 1510382993516
Here is my flume-agent.properties:
agent.sources = exec
agent.channels = mem-channel
agent.sinks = phoenix-sink
agent.sources.exec.type = exec
agent.sources.exec.command = tail -F /Users/chenshuai1/tmp/users.json
agent.sources.exec.channels = mem-channel
agent.sinks.phoenix-sink.type = org.apache.phoenix.flume.sink.PhoenixSink
agent.sinks.phoenix-sink.batchSize = 10
agent.sinks.phoenix-sink.zookeeperQuorum = localhost
agent.sinks.phoenix-sink.table = users2
agent.sinks.phoenix-sink.ddl = CREATE TABLE IF NOT EXISTS users2 (userid BIGINT NOT NULL, username VARCHAR, password VARCHAR, email VARCHAR, country VARCHAR, state VARCHAR, city VARCHAR, dt VARCHAR NOT NULL CONSTRAINT PK PRIMARY KEY (userid, dt))
agent.sinks.phoenix-sink.serializer = json
agent.sinks.phoenix-sink.serializer.columnsMapping = {"userid":"userid", "username":"username", "password":"password", "email":"email", "country":"country", "state":"state", "city":"city", "dt":"dt"}
agent.sinks.phoenix-sink.serializer.partialSchema = true
agent.sinks.phoenix-sink.serializer.columns = userid,username,password,email,country,state,city,dt
agent.sinks.phoenix-sink.channel = mem-channel
agent.channels.mem-channel.type = memory
agent.channels.mem-channel.capacity = 1000
agent.channels.mem-channel.transactionCapacity = 100
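The NoSuchMethodError above is the classic signature of a Guava version clash: twill 0.8.0 was compiled against Guava 13, and Service.startAndWait() was removed in Guava 16, so whichever guava-*.jar wins on the agent's classpath decides whether the call resolves. A quick way to inventory competing Guava jars (sketched against a throwaway directory here; point flume_home at your real installation instead):

```python
import pathlib
import tempfile

# Throwaway layout standing in for a real Flume install; replace with
# pathlib.Path("/path/to/apache-flume-1.7.0-bin") on your machine.
flume_home = pathlib.Path(tempfile.mkdtemp())
(flume_home / "lib").mkdir(parents=True)
(flume_home / "plugins.d" / "phoenix-sink" / "lib").mkdir(parents=True)
(flume_home / "lib" / "guava-18.0.jar").touch()  # simulated conflicting jar

# Every Guava jar the agent could load; more than one copy, or any 16.x+
# jar sitting alongside twill 0.8.0, is a red flag.
guava_jars = sorted(p.name for p in flume_home.rglob("guava-*.jar"))
```

If the scan turns up a Guava 16+ jar ahead of the one twill needs, pinning a Guava 13.x jar in the plugin's lib directory (or removing the newer one) is the usual remedy.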
I am trying to send a request with one header (authorization), but when I check the logs I see the same header added to the request twice, which causes my tests to fail.
Below is the POST method I am using:
public static Response doPost(String basePath, Object payload, String header) {
    return reqSpec.given().header("X-HSBC-E2E-Trust-Token", header).config(RestAssuredConfig.config())
            .body(payload.toString()).log().all()
            .post(basePath).then().extract().response();
}
Log:
Request URI: https:Url
Proxy: <none>
Request params: <none>
Query params: <none>
Form params: <none>
Path params: <none>
Headers:E2E-Trust-Token=0JDX09VRIMl9TRVJWRVJfREVWIn0.eyJzaXQiOiJhZDp1c3I6ZW1wbG95ZWVJZCIsInN1YiI6IkdCLVNWQy1GV0tUQVBJRlAiLCJhbHQiOlt7InNpdCI6ImVtYWlsIiwic3ViIjoiR0ItU1ZDLUZXS1RBUElGUEBOb3RSZWNlaXZpbmdNYWlmhzYmMuY29tIn0seyJzaXQiOiJuYW1lIiwic3ViIjoiR0ItU1ZDLUZXS1RBUElGUCJ9LHsic2l0IjoibG9naW5JZCIsInN1YiI6IkdCLVNWQy1GV0tUQVBJRlAifV0sImdycCI6WyJDTj1JbmZvZGlyLUFQSUF1dG9GV0stUHJvZFN W5kYXJkLE9VPUlUSURBUEksT1U9QXBwbGljYXRpb25zLE9VPUdyb3VwcyxEQz1JbmZvRGlyLERDPVByb2QsREM9SFNCQyIsIkNOPUluZm9kaXItQVBJQXV0b0ZXSy1Qcm9kQWRtaW4sT1U9SVRJREFQSSxPVT1BcHBsaWNhdGlvbnM1U9R3JvdXBzLERDPUluZm9EaXIsREM9UHJvZCxEQz1IU0JDIl0sInNjb3BlIjoiSVRJREFQSSIsImp0aSI6IjBlMGQ0YzM5LThlMjQtNDI5MC1iZGExLTE0MzllOTRlMzcxYyIsImlzcyI6ImdibDIwMTA5MDk2LmhjLmNsb3VkLnVmhzYmMiLCJpYXQiOjE2NTc1MzE2NTEsImV4cCI6MTY1NzUzMjI1MSwiYXVkIjoiR0ItU1ZDLUFQSUZXREVWQEhCRVUiLCJ1c2VyX25hbWUiOiJHQi1TVkMtRldLVEFQSUZQIn0.HgxR7j7fbl5JRQxTFX0Z7OJLk_11Vc4fUFPEZ9E
Accept=*/*
E2E-Trust-Token=0JDX09BVVRIMl9TRVJWRVJfREVWIn0.eyJzaXQiOiJhZDp1c3I6ZW1wbG95ZWVJZCIsInN1YiI6IkdCLVNWQy1GV0tUQVBJRlAiLCJhbHQiOlt7InNpdCI6ImVtYWlsIiwic3ViIjoiR0ItU1ZDLUZXS1RBUElGUEBOb3RSZWNlaXZmdNYWlsLmhzYmMuY29tIn0seyJzaXQiOiJuYW1lIiwic3ViIjoiR0ItU1ZDLUZXS1RBUElGUCJ9LHsic2l0IjoibG9naW5JZCIsInN1YiI6IkdCLVNWQy1GV0tUQVBJRlAifV0sImdycCI6WyJDTj1JbmZvZGlyLUFQSUF1dG9GV0sHJvZFN0YW5kYXJkLE9VPUlUSURBUEksT1U9QXBwbGljYXRpb25zLE9VPUdyb3VwcyxEQz1JbmZvRGlyLERDPVByb2QsREM9SFNCQyIsIkNOPUluZm9kaXItQVBJQXV0b0ZXSy1Qcm9kQWRtaW4sT1U9SVRJREFQSSxPVT1BcHBsaWN
3VkLnVrLmhzYmMiLCJpYXQiOjE2NTc1MzE2NTEsImV4cCI6MTY1NzUzMjI1MSwiYXVkIjoiR0ItU1ZDLUFQSUZXREVWQEhCRVUiLCJ1c2VyX25hbWUiOiJHQi1TVkMtRldLVEFQSUZQIn0.HgxR7j7fbl5JRQxTFX0Z7OJLk_11Vc4FPEZ9Ec2xVXmUxeE6M3f8wBe4HOrQDJw_gWm_Kvf_hepsvAu0S0ACRQ3P_4i5yf2z0FXtBoUd3v9AvZcB299PZGJhNZ4HxUEP5SdzjQWOuZgdHAFg6GJdgsYBLTuCXQqy-kdqgPczgx7bgKTBfqrWH2I4qH1ZSE2OlksvGPj0Ia1Ts
Content-Type=application/json; charset=UTF-8
Cookies: <none>
Multiparts: <none>
Body:
{
"name": "TestRole11072733",
"requiresApproval": true,
"signatures": [
1091
]
}
10:27:36.574 [main] DEBUG org.apache.http.impl.conn.BasicClientConnectionManager - Get connection for route {s}->https:URL
10:27:36.574 [main] DEBUG org.apache.http.impl.conn.DefaultClientConnectionOperator - Connecting to Endpoint
10:27:37.001 [main] DEBUG org.apache.http.client.protocol.RequestAddCookies - CookieSpec selected: ignoreCookies
10:27:37.001 [main] DEBUG org.apache.http.client.protocol.RequestAuthCache - Auth cache not set in the context
10:27:37.001 [main] DEBUG org.apache.http.client.protocol.RequestProxyAuthentication - Proxy auth state: UNCHALLENGED
10:27:37.001 [main] DEBUG org.apache.http.impl.client.DefaultHttpClient - Attempt 1 to execute request
10:27:37.001 [main] DEBUG org.apache.http.impl.conn.DefaultClientConnection - Sending request: POST Endpoint
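By default REST Assured merges headers by appending rather than replacing, so if the shared reqSpec already carries X-HSBC-E2E-Trust-Token, the .header(...) call in doPost adds a second copy. The append-vs-replace distinction is easy to see with a stdlib multimap-style header container (the header name and values below are illustrative):

```python
from email.message import Message  # stdlib container with HTTP-style repeated headers

headers = Message()
headers["X-Trust-Token"] = "set-on-the-shared-spec"  # e.g. baked into reqSpec
headers["X-Trust-Token"] = "set-in-doPost"           # .header(...) appends a 2nd

# Both values travel on the wire, which is what the duplicated log line shows.
duplicated = headers.get_all("X-Trust-Token")

# The fix: set the header in exactly one place, or explicitly replace it.
del headers["X-Trust-Token"]
headers["X-Trust-Token"] = "set-in-doPost"
deduplicated = headers.get_all("X-Trust-Token")
```

In REST Assured itself the equivalent is to drop the header from the shared spec, or (in versions that support it) configure HeaderConfig.headerConfig().overwritingHeaderName("X-HSBC-E2E-Trust-Token") so later values replace earlier ones.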
I am bringing up a Kafka environment with SSL; up to that point there are no problems and everything starts normally. But when I create a MySQL connector, the producer does not receive the config provided by the Docker environment.
Any suggestions?
---
version: '3'
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
container_name: ${ZK_HOST}
hostname: ${ZK_HOST}
ports:
- "${ZK_PORT}:${ZK_PORT}"
environment:
ZOOKEEPER_SERVER_ID: 1
ZOOKEEPER_CLIENT_PORT: ${ZK_PORT}
ZOOKEEPER_CLIENT_SECURE: 'true'
ZOOKEEPER_SSL_KEYSTORE_LOCATION: /etc/zookeeper/secrets/kafka.keystore.jks
ZOOKEEPER_SSL_KEYSTORE_PASSWORD: ${SSL_SECRET}
ZOOKEEPER_SSL_TRUSTSTORE_LOCATION: /etc/zookeeper/secrets/kafka.truststore.jks
ZOOKEEPER_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
volumes:
- ./secrets:/etc/zookeeper/secrets
kafka-ssl:
image: confluentinc/cp-kafka:latest
container_name: ${BROKER_HOST}
hostname: ${BROKER_HOST}
ports:
- "${BROKER_PORT}:${BROKER_PORT}"
depends_on:
- ${ZK_HOST}
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: '${ZK_HOST}:${ZK_PORT}'
KAFKA_ADVERTISED_LISTENERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
KAFKA_SSL_KEYSTORE_FILENAME: kafka.keystore.jks
KAFKA_SSL_KEYSTORE_CREDENTIALS: cert_creds
KAFKA_SSL_KEY_CREDENTIALS: cert_creds
KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.truststore.jks
KAFKA_SSL_TRUSTSTORE_CREDENTIALS: cert_creds
KAFKA_SSL_CLIENT_AUTH: 'required'
KAFKA_SECURITY_PROTOCOL: SSL
KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
volumes:
- ./secrets:/etc/kafka/secrets
schema-registry:
image: confluentinc/cp-schema-registry
container_name: ${SR_HOST}
hostname: ${SR_HOST}
depends_on:
- ${ZK_HOST}
- ${BROKER_HOST}
ports:
- "${SR_PORT}:${SR_PORT}"
environment:
SCHEMA_REGISTRY_HOST_NAME: ${SR_HOST}
SCHEMA_REGISTRY_LISTENERS: 'https://0.0.0.0:${SR_PORT}'
SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: '${ZK_HOST}:${ZK_PORT}'
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: SSL
SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION: /etc/schema-registry/secrets/kafka.keystore.jks
SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION: /etc/schema-registry/secrets/kafka.keystore.jks
SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD: ${SSL_SECRET}
SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD: ${SSL_SECRET}
SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD: ${SSL_SECRET}
SCHEMA_REGISTRY_SSL_KEY_PASSWORD: ${SSL_SECRET}
SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION: /etc/schema-registry/secrets/kafka.truststore.jks
SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION: /etc/schema-registry/secrets/kafka.truststore.jks
SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: https
SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
SCHEMA_REGISTRY_SSL_CLIENT_AUTH: 'true'
volumes:
- ./secrets:/etc/schema-registry/secrets
connect:
build:
context: .
dockerfile: Dockerfile
image: chethanuk/kafka-connect:5.3.1
hostname: ${SR_CON}
container_name: ${SR_CON}
depends_on:
- ${ZK_HOST}
- ${BROKER_HOST}
- ${SR_HOST}
ports:
- "${SR_CON_PORT}:${SR_CON_PORT}"
environment:
CONNECT_LISTENERS: 'https://0.0.0.0:${SR_CON_PORT}'
CONNECT_REST_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,DELETE,OPTIONS'
CONNECT_REST_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
CONNECT_BOOTSTRAP_SERVERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
CONNECT_REST_ADVERTISED_HOST_NAME: ${SR_CON}
CONNECT_REST_PORT: ${SR_CON_PORT}
CONNECT_GROUP_ID: compose-connect-group
CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: https://${SR_HOST}:${SR_PORT}
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_ZOOKEEPER_CONNECT: '${ZK_HOST}:${ZK_PORT}'
CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.2.1.jar
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
CONNECT_SSL_CLIENT_AUTH: 'true'
CONNECT_SECURITY_PROTOCOL: SSL
CONNECT_SSL_KEY_PASSWORD: ${SSL_SECRET}
CONNECT_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.truststore.jks
CONNECT_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
CONNECT_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.keystore.jks
CONNECT_SSL_KEYSTORE_PASSWORD: ${SSL_SECRET}
CONNECT_PRODUCER_SECURITY_PROTOCOL: SSL
CONNECT_PRODUCER_BOOTSTRAP_SERVERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
CONNECT_PRODUCER_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.truststore.jks
CONNECT_PRODUCER_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
CONNECT_CONSUMER_SECURITY_PROTOCOL: SSL
CONNECT_CONSUMER_BOOTSTRAP_SERVERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.truststore.jks
CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
volumes:
- ./secrets:/etc/kafka/secrets
Error:
[2021-05-21 05:13:50,157] INFO Requested thread factory for connector MySqlConnector, id = myql named = db-history-config-check (io.debezium.util.Threads)
[2021-05-21 05:13:50,160] INFO ProducerConfig values:
acks = 1
batch.size = 32768
bootstrap.servers = [broker:29092]
buffer.memory = 1048576
client.dns.lookup = default
client.id = myql-dbhistory
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 10000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
(org.apache.kafka.clients.producer.ProducerConfig)
[2021-05-21 05:13:50,162] WARN Couldn't resolve server broker:29092 from bootstrap.servers as DNS resolution failed for broker (org.apache.kafka.clients.ClientUtils)
[2021-05-21 05:13:50,162] INFO [Producer clientId=myql-dbhistory] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2021-05-21 05:13:50,162] INFO WorkerSourceTask{id=zabbix-hosts-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2021-05-21 05:13:50,162] INFO WorkerSourceTask{id=zabbix-hosts-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2021-05-21 05:13:50,163] ERROR WorkerSourceTask{id=zabbix-hosts-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:432)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
at io.debezium.relational.history.KafkaDatabaseHistory.start(KafkaDatabaseHistory.java:235)
at io.debezium.relational.HistorizedRelationalDatabaseSchema.<init>(HistorizedRelationalDatabaseSchema.java:40)
at io.debezium.connector.mysql.MySqlDatabaseSchema.<init>(MySqlDatabaseSchema.java:90)
at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:94)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:88)
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:407)
... 14 more
[2021-05-21 05:13:50,164] ERROR WorkerSourceTask{id=zabbix-hosts-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
The .env vars:
SSL_SECRET=
ZK_HOST=zookeeper
ZK_PORT=2181
BROKER_HOST=kafka-ssl
BROKER_PORT=9092
SR_HOST=schema-registry
SR_PORT=8181
SR_CON=connect
SR_CON_PORT=8083
HOST=localhost
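One thing to note: compose substitutes ${VAR} from the .env file verbatim, so an empty assignment like SSL_SECRET= (possibly just redacted in the post) silently becomes an empty keystore password in every service. A tiny .env sanity check (file contents simulated from the vars above):

```python
# Simulated .env contents; in practice read them from the real file with
# open(".env").read().
env_text = """\
SSL_SECRET=
ZK_HOST=zookeeper
ZK_PORT=2181
BROKER_HOST=kafka-ssl
BROKER_PORT=9092
"""

env = dict(line.split("=", 1) for line in env_text.splitlines() if line)
# Any variable that interpolates to an empty string is a likely misconfig.
empty_vars = [name for name, value in env.items() if not value]
```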
build and image should not be used together. You haven't shown your Dockerfile, so it's unclear what you're doing there, but it may explain why no variables are actually loaded.
bootstrap.servers = [broker:29092]
Somewhere in your Connect configuration, you're not using kafka-ssl:9092 as the connection string
Notice that your key and value serializers are using String rather than Avro settings, too... the interceptor list is empty, the SSL settings don't seem to be applied, etc.
To narrow it down, I don't think you need _PRODUCER_BOOTSTRAP_SERVERS or the consumer equivalent.
You should exec into your container and look at the templated connect-distributed.properties file that was created.
Note that the Debezium images come with the mysql connector classes, so maybe you don't need your own image?
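To see what the worker actually ended up with, dump the templated file from the running container (docker exec connect cat /etc/kafka/connect-distributed.properties) and check bootstrap.servers. A minimal parse of such a file, with the contents simulated below using the bad value from the log:

```python
import os
import tempfile

# Simulated copy of /etc/kafka/connect-distributed.properties; in real life,
# fetch it with: docker exec connect cat /etc/kafka/connect-distributed.properties
with tempfile.NamedTemporaryFile("w", suffix=".properties", delete=False) as f:
    f.write("bootstrap.servers=broker:29092\nsecurity.protocol=PLAINTEXT\n")
    path = f.name

props = {}
with open(path) as fh:
    for line in fh:
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            props[key] = value
os.remove(path)

# Anything other than the SSL listener is the smoking gun:
wrong_broker = props["bootstrap.servers"] != "kafka-ssl:9092"
```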
I have had Odoo 10 working for the last 4 years. The Scheduled Actions had been working fine until 7th May 2021.
Server specs:
CPU: 4 cores
RAM: 16 GB
OS: Ubuntu
The database name is kwspl.
In the server log, I find the following lines :
File "/opt/odoo/odoo-server/addons/bus/controllers/main.py", line 35, in poll
raise Exception("bus.Bus unavailable")
Exception: bus.Bus unavailable
2021-05-24 15:50:54,391 2376 INFO kwspl werkzeug: 127.0.0.1 - - [24/May/2021 15:50:54] "POST /longpolling/poll HTTP/1.1" 200 -
**2021-05-24 15:50:56,701 2381 DEBUG ? odoo.service.server: WorkerCron (2381) polling for jobs
2021-05-24 15:50:56,702 2381 DEBUG ? odoo.service.server: WorkerCron (2381) 'kwspl' time:0.001s mem: 233352k -> 233352k (diff: 0k)
2021-05-24 15:51:03,660 2382 DEBUG ? odoo.service.server: WorkerCron (2382) polling for jobs
2021-05-24 15:51:03,662 2382 DEBUG ? odoo.service.server: WorkerCron (2382) 'kwspl' time:0.002s mem: 233352k -> 233352k (diff: 0k)**
2021-05-24 15:51:04,530 2379 DEBUG kwspl odoo.modules.registry: Multiprocess signaling check: [Registry - 614 -> 614] [Cache - 57570 -> 57570]
2021-05-24 15:51:04,532 2379 ERROR kwspl odoo.http: Exception during JSON request handling.
Traceback (most recent call last):
File "/opt/odoo/odoo-server/odoo/http.py", line 640, in _handle_exception
return super(JsonRequest, self)._handle_exception(exception)
File "/opt/odoo/odoo-server/odoo/http.py", line 677, in dispatch
result = self._call_function(**self.params)
The odoo.conf is as below :
[options]
addons_path = /opt/odoo/odoo-server/addons,/opt/odoo/custom/addons
admin_passwd = ******
csv_internal_sep = ,
data_dir = /opt/odoo/.local/share/Odoo
#db_filter = kwspl
db_host = False
db_maxconn = 64
#db_name = False
db_name = 'kwspl'
db_password = False
db_port = False
db_template = template1
db_user = odoo
dbfilter = ^kwspl$
demo = {}
email_from = False
geoip_database = /usr/share/GeoIP/GeoLiteCity.dat
import_partial =
limit_memory_hard = 4684354560
limit_memory_soft = 4147483648
limit_request = 8192
limit_time_cpu = 420
limit_time_real = 180
limit_time_real_cron = -1
list_db = False
log_db = False
log_db_level = warning
#log_handler = :INFO
log_level = debug
logfile = /var/log/odoo/odoo-server.log
logrotate = False
longpolling_port = 8072
max_cron_threads = 2
osv_memory_age_limit = 1.0
osv_memory_count_limit = False
pg_path = None
pidfile = None
proxy_mode = True
reportgz = False
server_wide_modules = web,web_kanban
smtp_password = False
smtp_port = 25
smtp_server = localhost
smtp_ssl = False
smtp_user = False
syslog = False
test_commit = False
test_enable = False
test_file = False
test_report_directory = False
translate_modules = ['all']
unaccent = False
without_demo = False
workers = 4
xmlrpc = True
#xmlrpc_interface =
xmlrpc_port = 8069
If I change the following parameters in odoo.conf:
db_name = False
dbfilter = ^%d$
The following lines are seen in the log:
raise Exception("bus.Bus unavailable")
Exception: bus.Bus unavailable
2021-05-24 15:59:58,457 2574 INFO kwspl werkzeug: 127.0.0.1 - - [24/May/2021 15:59:58] "POST /longpolling/poll HTTP/1.1" 200 -
2021-05-24 16:00:03,261 2576 DEBUG ? odoo.service.server: WorkerCron (2576) polling for jobs
2021-05-24 16:00:03,316 2576 DEBUG ? odoo.tools.translate: translation went wrong for "'Selecting the "Warning" option will notify user with the message, Selecting "Blocking Message" will throw an exception with the message and block the flow. The Message has to be written in the next field.'", skipped
2021-05-24 16:00:03,376 2576 WARNING ? odoo.addons.base.ir.ir_cron: Skipping database kwspl because of modules to install/upgrade/remove.
2021-05-24 16:00:03,377 2576 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to 'dbname=kwspl user=odoo'
2021-05-24 16:00:03,377 2576 DEBUG ? odoo.service.server: WorkerCron (2576) kwspl time:0.109s mem: 220928k -> 227084k (diff: 6156k)
2021-05-24 16:00:03,377 2576 DEBUG ? odoo.service.server: WorkerCron (2576) polling for jobs
2021-05-24 16:00:03,388 2576 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to "dbname=\\'kwspl\\' user=odoo"
2021-05-24 16:00:03,388 2576 DEBUG ? odoo.service.server: WorkerCron (2576) 'kwspl' time:0.006s mem: 227084k -> 227084k (diff: 0k)
2021-05-24 16:00:04,190 2577 DEBUG ? odoo.service.server: WorkerCron (2577) polling for jobs
2021-05-24 16:00:04,244 2577 DEBUG ? odoo.tools.translate: translation went wrong for "'Selecting the "Warning" option will notify user with the message, Selecting "Blocking Message" will throw an exception with the message and block the flow. The Message has to be written in the next field.'", skipped
2021-05-24 16:00:04,264 2577 WARNING ? odoo.addons.base.ir.ir_cron: Skipping database kwspl because of modules to install/upgrade/remove.
2021-05-24 16:00:04,264 2577 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to 'dbname=kwspl user=odoo'
2021-05-24 16:00:04,264 2577 DEBUG ? odoo.service.server: WorkerCron (2577) kwspl time:0.068s mem: 220928k -> 227172k (diff: 6244k)
2021-05-24 16:00:04,265 2577 DEBUG ? odoo.service.server: WorkerCron (2577) polling for jobs
2021-05-24 16:00:04,274 2577 INFO ? odoo.sql_db: ConnectionPool(used=0/count=0/max=64): Closed 1 connections to "dbname=\\'kwspl\\' user=odoo"
2021-05-24 16:00:04,275 2577 DEBUG ? odoo.service.server: WorkerCron (2577) 'kwspl' time:0.006s mem: 227172k -> 227172k (diff: 0k)
2021-05-24 16:00:05,377 2571 DEBUG kwspl odoo.modules.registry: Multiprocess signaling check: [Registry - 614 -> 614] [Cache - 57570 -> 57570]
The Scheduled Actions are no longer running, while the automated tasks are working normally.
Is this the cause? -> Skipping database kwspl because of modules to install/upgrade/remove.
If this is the issue, how do I check which module is the culprit?
Any guesses?
On further investigation, I found an error message in the logs:
crm_rma_lot_mass_return: module not found
I tried to find this module in the current directories but could not find it.
So I created an Odoo scaffolding module with the same name and uploaded it to the server in one of the addons directories mentioned in odoo-server.conf.
This solved the problem: Odoo is now executing the Scheduled Actions.
If someone faces a similar problem, I would be happy to help.
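The "Skipping database ... because of modules to install/upgrade/remove" warning comes from ir_cron refusing to run jobs while any module sits in a pending state, so querying ir_module_module shows the culprit directly. A runnable sketch (sqlite stands in for the real Postgres database; against kwspl you would run the same SELECT via psql as the odoo user):

```python
import sqlite3  # stand-in for psycopg2 / psql against the real 'kwspl' database

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ir_module_module (name TEXT, state TEXT)")
con.executemany(
    "INSERT INTO ir_module_module VALUES (?, ?)",
    [("base", "installed"),
     ("crm_rma_lot_mass_return", "to upgrade")],  # simulated pending module
)
# Any row in one of these states blocks the cron worker for the whole database.
pending = con.execute(
    "SELECT name, state FROM ir_module_module "
    "WHERE state IN ('to install', 'to upgrade', 'to remove')"
).fetchall()
```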
Whenever I try to test using the node driver, my flows hang at the point of notarisation.
Examining the node logs shows that the notary's message broker was unreachable:
[INFO ] 09:33:26,653 [nioEventLoopGroup-3-3] (AMQPClient.kt:91)
netty.AMQPClient.run - Retry connect {}
[INFO ] 09:33:26,657 [nioEventLoopGroup-3-4] (AMQPClient.kt:76)
netty.AMQPClient.operationComplete - Connected to localhost:10001 {}
[INFO ] 09:33:26,658 [nioEventLoopGroup-3-4]
(AMQPChannelHandler.kt:49) O=Notary Service, L=Zurich,
C=CH.channelActive - New client connection db926eb8 from
localhost/127.0.0.1:10001 to /127.0.0.1:63781 {}
[INFO ] 09:33:26,658
[nioEventLoopGroup-3-4] (AMQPClient.kt:86)
netty.AMQPClient.operationComplete - Disconnected from localhost:10001
{}
[ERROR] 09:33:26,658 [nioEventLoopGroup-3-4]
(AMQPChannelHandler.kt:98) O=Notary Service, L=Zurich,
C=CH.userEventTriggered - Handshake failure
SslHandshakeCompletionEvent(java.nio.channels.ClosedChannelException)
{}
[INFO ] 09:33:26,659 [nioEventLoopGroup-3-4]
(AMQPChannelHandler.kt:74) O=Notary Service, L=Zurich,
C=CH.channelInactive - Closed client connection db926eb8 from
localhost/127.0.0.1:10001 to /127.0.0.1:63781 {}
[INFO ] 09:33:26,659
[nioEventLoopGroup-3-4] (AMQPBridgeManager.kt:115)
peers.DLF1ZmHt1DXc9HbxzDNm6VHduUABBbNsp7Mh4DhoBs6ifd ->
localhost:10001:O=Notary Service, L=Zurich, C=CH.onSocketConnected -
Bridge Disconnected {}
While the notary logs display the following:
[INFO ] 13:24:21,735 [main] (ActiveMQServerImpl.java:540)
core.server.internalStart - AMQ221001: Apache ActiveMQ Artemis Message
Broker version 2.2.0 [localhost,
nodeID=7b3df3b8-98aa-11e8-83bd-ead493c8221e] {}
[DEBUG] 13:24:21,735 [main] (ArtemisRpcBroker.kt:51)
rpc.ArtemisRpcBroker.start - Artemis RPC broker is started. {}
[INFO ] 13:24:21,737 [main] (ArtemisMessagingClient.kt:28)
internal.ArtemisMessagingClient.start - Connecting to message broker:
localhost:10001 {}
[ERROR] 13:24:22,298 [main] (NettyConnector.java:713)
core.client.createConnection - AMQ214016: Failed to create netty
connection {} java.nio.channels.ClosedChannelException: null
at io.netty.handler.ssl.SslHandler.channelInactive(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
[DEBUG] 13:24:22,362 [main] (PersistentIdentityService.kt:137)
identity.PersistentIdentityService.verifyAndRegisterIdentity -
Registering identity O=Notary Service, L=Zurich, C=CH {}
[WARN ] 13:24:22,363 [main] (AppendOnlyPersistentMap.kt:79)
utilities.AppendOnlyPersistentMapBase.set - Double insert in
net.corda.node.utilities.AppendOnlyPersistentMap for entity class
class
net.corda.node.services.identity.PersistentIdentityService$PersistentIdentity
key 69ACAA32A0C7934D9454CB53EEA6CA6CCD8E4090B30C560A5A36EA10F3DC13E8,
not inserting the second time {}
[ERROR] 13:24:22,368 [main] (NodeStartup.kt:125) internal.Node.run -
Exception during node startup {}
org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException:
AMQ119007: Cannot connect to server(s). Tried with all available
servers.
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:787)
~[artemis-core-client-2.2.0.jar:2.2.0]
at net.corda.nodeapi.internal.ArtemisMessagingClient.start(ArtemisMessagingClient.kt:39)
~[corda-node-api-3.2-corda.jar:?]
at net.corda.nodeapi.internal.bridging.AMQPBridgeManager.start(AMQPBridgeManager.kt:195)
~[corda-node-api-3.2-corda.jar:?]
at net.corda.nodeapi.internal.bridging.BridgeControlListener.start(BridgeControlListener.kt:35)
~[corda-node-api-3.2-corda.jar:?]
at net.corda.node.internal.Node.startMessagingService(Node.kt:301) ~[corda-node-3.2-corda.jar:?]
How do I fix this?
IntelliJ Ultimate ships with the YourKit profiler, which by default starts when IntelliJ starts and listens on port 10001 - the default port for the notary in Driver.
You can locate the config for this in IntelliJ's *.vmoptions file and alter it to use a different port.
Your new config line will look something like this:
-agentlib:yjpagent=delay=10000,probe_disable=*,port=30000
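A quick way to confirm that something (YourKit or anything else) is already squatting on the notary's port before blaming the driver is a plain TCP probe; a throwaway listener stands in for the conflicting process here:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """True if something is already listening on host:port (e.g. 10001)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

# Demo: bind a throwaway listener on an ephemeral port and probe it the same
# way you would probe the Driver's default notary port, 10001.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
busy_port = listener.getsockname()[1]
was_busy = port_in_use(busy_port)
listener.close()
```

Run port_in_use(10001) before starting the driver; if it returns True with no Corda nodes running, the profiler (or another process) still owns the port.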
There doesn't seem to be much out there right now on how to properly add WSS support to an Autobahn/Twisted setup. I'm starting with the Crossbar serial2ws example, which shows a WS-based connection between frontend and backend.
I'd like to know how to adapt the serial2ws example for an SSL connection.
I changed:
# serial2ws.py
router = args.router or 'ws://localhost:8080'
to
router = args.router or 'wss://localhost:8080'
And on the website JS:
connection = new autobahn.Connection({
url: (document.location.protocol === "http:" ? "ws:" : "wss:") + "//" + ip + ":" + port,
realm: 'realm1',
...
})
But when I try to connect, it fails with:
WebSocket connection to 'wss://192.168.0.12:8080/' failed: Error in connection establishment: net::ERR_CONNECTION_CLOSED
The Python server logs:
2016-06-30 16:52:57-0400 [-] Log opened.
2016-06-30 16:52:57-0400 [-] Using Twisted reactor <class
'twisted.internet.epollreactor.EPollReactor'>
2016-06-30 16:52:59-0400 [-] WampWebSocketServerFactory starting on 8080
2016-06-30 16:52:59-0400 [-] Starting factory <autobahn.twisted.websocket.WampWebSocketServerFactory instance at 0x76669dc8>
2016-06-30 16:53:00-0400 [-] Starting factory <autobahn.twisted.websocket.WampWebSocketClientFactory instance at 0x766112b0>
2016-06-30 16:53:05-0400 [WampWebSocketClientProtocol (TLSMemoryBIOProtocol),client] Stopping factory <autobahn.twisted.websocket.WampWebSocketClientFactory instance at 0x766112b0>
To be clear, when the "wss" instances above are reverted to the original "ws", everything works.
I also tried adding the following to serial2ws.py:
contextFactory = ssl.DefaultOpenSSLContextFactory('/root/keys/server.key', '/root/keys/server.crt')
# Change
reactor.listenTCP(args.web, Site(File(".")))
# to
reactor.listenSSL(args.web, Site(File(".")), contextFactory)
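One thing worth checking: the listenSSL change above only wraps the static-file Site, so if the WAMP/WebSocket factory on port 8080 is still served by a plain listenTCP, a wss:// client completes the TCP connect and then dies exactly as shown (ERR_CONNECTION_CLOSED). That failure mode is reproducible with the stdlib: a non-TLS listener probed by a TLS client fails the handshake.

```python
import socket
import ssl
import threading

# A plain (non-TLS) TCP listener, standing in for a WebSocket endpoint that
# was started with listenTCP instead of listenSSL.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

def accept_once():
    conn, _ = server.accept()
    conn.close()  # server never answers the TLS ClientHello

threading.Thread(target=accept_once, daemon=True).start()

# A wss:// client begins with a TLS handshake; against the plain listener it
# fails, which browsers surface as ERR_CONNECTION_CLOSED.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
handshake_failed = False
try:
    with socket.create_connection((host, port), timeout=2) as raw:
        with ctx.wrap_socket(raw, server_hostname=host):
            pass
except OSError:  # ssl.SSLError and ConnectionError are OSError subclasses
    handshake_failed = True
server.close()
```

If this matches what you see, the fix is to terminate TLS on the WebSocket listener too, i.e. pass the context factory to whatever serves port 8080, not only to the static Site.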