Can't get Celery/RabbitMQ to run my shared task in Django

I have an application that is already deployed and working in production, but it was all set up by someone else. I am now attempting to build a local version of the environment, and I can't get my local Celery/RabbitMQ to actually run the task.
The application is very large, so I won't attempt to post it all here, but I have a few clues from my debugging that may be useful. One is this: when I run the following code:
task_id = celery_send_playbook_msg_util.apply_async(
    [brand_user.id, pb['id'], sequence_id, '', False, False, message_id,
     pb['playbook'], event_type == constants.event_types['Abandoned']],
    eta=delivery_datetime, queue='high_priority', priority=8)
print("Celery Task ID: " + str(task_id))
I actually do get a UUID-style task ID back, which suggests to me that the Celery broker is reachable. I have also tried the following broker configuration options (so far none have worked):
#BROKER_URL = 'amqp://test:test@192.168.33.10:5672//'
#BROKER_URL = 'amqp://test:test@localhost:5672//'
#BROKER_URL = 'amqp://test:test@localhost//'
BROKER_URL = 'amqp://test:test@192.168.33.10//'
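As a sanity check (a sketch only; the Python.celery import path is a guess based on the worker command below), something like this from a Django shell will confirm whether the broker URL is usable and which queues the running workers actually consume:
from Python.celery import app  # hypothetical path; use your project's Celery app module

conn = app.connection()                        # connection built from BROKER_URL
conn.ensure_connection(max_retries=1)          # raises immediately if the broker is unreachable
print(app.control.inspect().active_queues())   # queues each live worker consumes (None if no workers reply)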
Other clues:
It occurred to me that it might be helpful to see the output of the command I used to start the workers, so here it is:
celery -A Python worker --loglevel=debug
-------------- celery@vagrant v4.2.1 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.15.0-29-generic-x86_64-with-Ubuntu-18.04-bionic 2019-08-02 20:42:29
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: Python:0x7fa1367f0650
- ** ---------- .> transport: amqp://test:**@192.168.33.10:5672//
- ** ---------- .> results: rpc://
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. Python.celery.debug_task
. Python.celery.send_messages_daily_unreadcount
. Sensus.tasks.bulk_manual_optin_from_csv_task
. Sensus.tasks.celery_csv_upload_send_message
. Sensus.tasks.celery_send_messages_daily_util
. Sensus.tasks.celery_send_msg_util
. Sensus.tasks.celery_send_payment_message
. Sensus.tasks.celery_send_playbook_msg_util
. Sensus.tasks.consolidate_messages_and_analyze_sentiment
. Sensus.tasks.scheduled_broadcast_task
. celery.accumulate
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
[2019-08-02 20:42:29,590: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'product': 'RabbitMQ', 'copyright': 'Copyright (C) 2007-2017 Pivotal Software, Inc.', 'capabilities': {'exchange_exchange_bindings': True, 'connection.blocked': True, 'authentication_failure_close': True, 'direct_reply_to': True, 'basic.nack': True, 'per_consumer_qos': True, 'consumer_priorities': True, 'consumer_cancel_notify': True, 'publisher_confirms': True}, 'cluster_name': 'rabbit#vagrant.vm', 'platform': 'Erlang/OTP', 'version': '3.6.10'}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: [u'en_US']
[2019-08-02 20:42:29,592: INFO/MainProcess] Connected to amqp://test:**@192.168.33.10:5672//
[2019-08-02 20:42:29,601: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'product': 'RabbitMQ', 'copyright': 'Copyright (C) 2007-2017 Pivotal Software, Inc.', 'capabilities': {'exchange_exchange_bindings': True, 'connection.blocked': True, 'authentication_failure_close': True, 'direct_reply_to': True, 'basic.nack': True, 'per_consumer_qos': True, 'consumer_priorities': True, 'consumer_cancel_notify': True, 'publisher_confirms': True}, 'cluster_name': 'rabbit#vagrant.vm', 'platform': 'Erlang/OTP', 'version': '3.6.10'}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: [u'en_US']
[2019-08-02 20:42:29,603: INFO/MainProcess] mingle: searching for neighbors
[2019-08-02 20:42:29,604: DEBUG/MainProcess] using channel_id: 1
[2019-08-02 20:42:29,606: DEBUG/MainProcess] Channel open
[2019-08-02 20:42:29,621: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'product': 'RabbitMQ', 'copyright': 'Copyright (C) 2007-2017 Pivotal Software, Inc.', 'capabilities': {'exchange_exchange_bindings': True, 'connection.blocked': True, 'authentication_failure_close': True, 'direct_reply_to': True, 'basic.nack': True, 'per_consumer_qos': True, 'consumer_priorities': True, 'consumer_cancel_notify': True, 'publisher_confirms': True}, 'cluster_name': 'rabbit#vagrant.vm', 'platform': 'Erlang/OTP', 'version': '3.6.10'}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: [u'en_US']
[2019-08-02 20:42:29,623: DEBUG/MainProcess] using channel_id: 1
[2019-08-02 20:42:29,624: DEBUG/MainProcess] Channel open
[2019-08-02 20:42:30,630: INFO/MainProcess] mingle: all alone
[2019-08-02 20:42:30,636: DEBUG/MainProcess] using channel_id: 2
[2019-08-02 20:42:30,637: DEBUG/MainProcess] Channel open
[2019-08-02 20:42:30,641: DEBUG/MainProcess] using channel_id: 3
[2019-08-02 20:42:30,642: DEBUG/MainProcess] Channel open
[2019-08-02 20:42:30,645: DEBUG/MainProcess] using channel_id: 1
[2019-08-02 20:42:30,646: DEBUG/MainProcess] Channel open
[2019-08-02 20:42:30,649: WARNING/MainProcess] /home/vagrant/.local/lib/python2.7/site-packages/celery/fixups/django.py:200: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
warnings.warn('Using settings.DEBUG leads to a memory leak, never '
[2019-08-02 20:42:30,650: INFO/MainProcess] celery@vagrant ready.
[2019-08-02 20:42:30,651: DEBUG/MainProcess] basic.qos: prefetch_count->4
[2019-08-02 20:42:50,649: DEBUG/MainProcess] heartbeat_tick : for connection bcbf34af62f3488c8bbcee3f18b42621
[2019-08-02 20:42:50,651: DEBUG/MainProcess] heartbeat_tick : Prev sent/recv: None/None, now - 28/58, monotonic - 11221.8028604, last_heartbeat_sent - 11221.8028469, heartbeat int. - 60 for connection bcbf34af62f3488c8bbcee3f18b42621
[2019-08-02 20:43:10,655: DEBUG/MainProcess] heartbeat_tick : for connection bcbf34af62f3488c8bbcee3f18b42621
[2019-08-02 20:43:10,656: DEBUG/MainProcess] heartbeat_tick : Prev sent/recv: 28/58, now - 28/88, monotonic - 11241.8086932, last_heartbeat_sent - 11221.8028469, heartbeat int. - 60 for connection bcbf34af62f3488c8bbcee3f18b42621

So I figured it out. Here is how.
First I used the following command to see what was in the queue:
sudo rabbitmqctl list_queues
This gave me the following output:
Listing queues
d68c3a7d-ed35-3c79-b571-0d01ccda84ad 1
2753309c-9f03-399c-871d-5b4ffcbea462 0
high_priority 23
8ce8d7e0-0081-3937-80fb-ff238be8f410 1
4ce2ecce-6954-3c07-857a-4221fe613e72 0
celery 0
celery@vagrant.celery.pidbox 0
celeryev.1a7429e0-48b2-4ead-925c-42ee1855247d 0
8127f8e8-073c-3972-a563-829ab207b964 0
I was curious what the 23 next to 'high_priority' was, and I noticed that it kept going up every time I tried something that should have been put in the queue. As it turns out, when my application enqueues something it doesn't go into the default queue; it goes into a queue that we have named 'high_priority'. Because I hadn't noticed this, I was starting my worker to consume only the default queue. To solve the problem I added a -Q option to the worker command like so:
celery -A Python worker --loglevel=debug -Q high_priority
And this solved the problem.
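A further note: if you would rather not have to remember the -Q flag, you can declare the queues on the Celery app so that a plain worker consumes all of them. This is only a sketch (the task name is taken from the worker output above; the x-max-priority argument is an assumption and must match however the existing high_priority queue was originally declared, otherwise RabbitMQ will refuse to redeclare it):
# Python/celery.py - assumes the Celery app object is named `app`
from kombu import Exchange, Queue

app.conf.task_queues = (
    Queue('celery', Exchange('celery'), routing_key='celery'),
    Queue('high_priority', Exchange('high_priority'), routing_key='high_priority',
          queue_arguments={'x-max-priority': 10}),  # assumption: lets priority=8 in apply_async take effect
)
app.conf.task_routes = {
    'Sensus.tasks.celery_send_playbook_msg_util': {'queue': 'high_priority'},
}
With task_queues set, celery -A Python worker subscribes to every queue listed there by default, so the -Q flag becomes optional.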


Error using the Hyperledger Caliper framework

I have two tests, one that reads from and one that writes to the blockchain. I'm getting two different errors: one at the start of the run and one during the writing test. The reading test works normally without problems.
Initial error:
2021.12.21-16:43:01.870 info [caliper] [round-orchestrator] Preparing worker connections
2021.12.21-16:43:01.870 info [caliper] [worker-orchestrator] Launching worker 1 of 2
2021.12.21-16:43:01.878 info [caliper] [worker-orchestrator] Launching worker 2 of 2
2021.12.21-16:43:01.884 info [caliper] [worker-orchestrator] Messenger not configured, entering configure phase...
2021.12.21-16:43:01.885 info [caliper] [worker-orchestrator] No existing workers detected, entering worker launch phase...
2021.12.21-16:43:01.885 info [caliper] [worker-orchestrator] Waiting for 2 workers to be connected...
2021.12.21-16:43:02.426 info [caliper] [cli-launch-worker] Set workspace path: /home/ubuntu/caliper/caliper-benchmarks/monitor
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set benchmark configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/config.yaml
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set network configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/network.yaml
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set SUT type: fabric
2021.12.21-16:43:02.444 info [caliper] [cli-launch-worker] Set workspace path: /home/ubuntu/caliper/caliper-benchmarks/monitor
2021.12.21-16:43:02.446 info [caliper] [cli-launch-worker] Set benchmark configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/config.yaml
2021.12.21-16:43:02.446 info [caliper] [cli-launch-worker] Set network configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/network.yaml
2021.12.21-16:43:02.447 info [caliper] [cli-launch-worker] Set SUT type: fabric
2021.12.21-16:43:02.505 info [caliper] [worker-orchestrator] 2 workers connected, progressing to worker assignment phase.
2021.12.21-16:43:02.505 info [caliper] [worker-orchestrator] Workers currently unassigned, awaiting index assignment...
2021.12.21-16:43:02.506 info [caliper] [worker-orchestrator] Waiting for 2 workers to be assigned...
2021.12.21-16:43:02.559 info [caliper] [worker-orchestrator] 2 workers assigned, progressing to worker initialization phase.
2021.12.21-16:43:02.560 info [caliper] [worker-orchestrator] Waiting for 2 workers to be ready...
2021.12.21-16:43:03.629 info [caliper] [worker-message-handler] Initializing Worker#1...
2021.12.21-16:43:03.629 info [caliper] [fabric-connector] Initializing gateway connector compatible with installed SDK: 2.2.3
2021.12.21-16:43:03.629 info [caliper] [IdentityManager] Adding User1 (admin=false) as User1 for organization Org1MSP
2021.12.21-16:43:03.629 info [caliper] [worker-message-handler] Worker#1 initialized
2021.12.21-16:43:03.683 info [caliper] [worker-orchestrator] 2 workers ready, progressing to test preparation phase.
2021.12.21-16:43:03.684 info [caliper] [round-orchestrator] Started round 1 (Set)
2021.12.21-16:43:03.690 info [caliper] [worker-message-handler] Preparing Worker#1 for Round#0
2021.12.21-16:43:03.696 info [caliper] [connectors/v2/FabricGateway] Connecting user with identity User1 to a Network Gateway
2021.12.21-16:43:04.005 info [caliper] [worker-message-handler] Initializing Worker#0...
2021.12.21-16:43:04.005 info [caliper] [fabric-connector] Initializing gateway connector compatible with installed SDK: 2.2.3
2021.12.21-16:43:04.005 info [caliper] [IdentityManager] Adding User1 (admin=false) as User1 for organization Org1MSP
2021.12.21-16:43:04.005 info [caliper] [worker-message-handler] Worker#0 initialized
2021.12.21-16:43:04.006 info [caliper] [worker-message-handler] Preparing Worker#0 for Round#0
2021.12.21-16:43:04.006 info [caliper] [connectors/v2/FabricGateway] Connecting user with identity User1 to a Network Gateway
2021.12.21-16:43:04.007 info [caliper] [connectors/v2/FabricGateway] Successfully connected user with identity User1 to a Network Gateway
2021.12.21-16:43:04.008 info [caliper] [connectors/v2/FabricGateway] Generating contract map for user User1
2021.12.21-16:43:04.018 info [caliper] [connectors/v2/FabricGateway] Successfully connected user with identity User1 to a Network Gateway
2021.12.21-16:43:04.019 info [caliper] [connectors/v2/FabricGateway] Generating contract map for user User1
2021-12-21T16:43:07.083Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.086Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server orderer.example.com:7050 url:grpc://localhost:7050 timeout:3000
2021-12-21T16:43:07.088Z - error: [DiscoveryService]: _buildOrderer[channelall] - Unable to connect to the discovered orderer orderer.example.com:7050 due to Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.085Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.090Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server orderer.example.com:7050 url:grpc://localhost:7050 timeout:3000
2021-12-21T16:43:07.092Z - error: [DiscoveryService]: _buildOrderer[channelall] - Unable to connect to the discovered orderer orderer.example.com:7050 due to Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
The second error that occurs in the writing test is the following:
2021.12.21-16:43:07.112 info [caliper] [worker-orchestrator] 2 workers prepared, progressing to test phase.
2021.12.21-16:43:07.112 info [caliper] [round-orchestrator] Monitors successfully started
2021.12.21-16:43:07.115 info [caliper] [worker-message-handler] Worker#1 is starting Round#0
2021.12.21-16:43:07.116 info [caliper] [worker-message-handler] Worker#0 is starting Round#0
2021.12.21-16:43:07.123 info [caliper] [caliper-worker] Worker #1 starting workload loop
2021.12.21-16:43:07.126 info [caliper] [caliper-worker] Worker #0 starting workload loop
2021.12.21-16:43:07.941 error [caliper] [connectors/v2/FabricGateway] Failed to perform submit transaction [set] using arguments [node1,{'CPU':50,'MEM':50,'STG':50.0,'DAT':'2020-11-17T00:10:00Z'}], with error: Error: No endorsement plan available
at DiscoveryHandler.endorse (/home/ubuntu/caliper/node_modules/fabric-network/node_modules/fabric-common/lib/DiscoveryHandler.js:208:10)
at process._tickCallback (internal/process/next_tick.js:68:7)
Connection File
---
name: fabric
version: 2.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
    certificateAuthorities:
      - ca.org1.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    tlsCACerts:
      path: peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://192.169.0.7:7054
    caName: ca-org1
    tlsCACerts:
      path: peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    httpOptions:
      verify: false
Network File
name: Fabric
version: '2.0.0'
caliper:
  blockchain: fabric
  sutOptions:
    mutualTls: false
organizations:
  - mspid: Org1MSP
    identities:
      certificates:
        - name: 'User1'
          clientPrivateKey:
            path: 'peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/priv_sk'
          clientSignedCert:
            path: 'peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem'
    connectionProfile:
      path: 'connection_files/connection-org1.yaml'
      discover: true
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
channels:
  - channelName: channelall
    contracts:
      - id: monitor
I kindly ask for any tips that would help me solve these problems and continue with development.
A couple of points about the network file you posted:
You can't define any nodes in it (for example, you've added orderers); they are ignored.
You've specified that your connection profile is a dynamic profile by setting discover to true in your network file. This means Caliper will use discovery to determine the network topology and may not use the nodes you have explicitly defined in your connection profile. If you want to be explicit in your connection profile (and thus define a static connection profile), as in your example above, you should set discover to false, which will hopefully solve your problem.
As a side note, if you use discovery then the node-sdk (used by Caliper) and Caliper by default convert all discovered node hosts to localhost, which is why you see it trying to contact localhost. To disable this, see the runtime settings in https://hyperledger.github.io/caliper/v0.4.2/fabric-config/new/
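To make that concrete, the only change suggested above is the discover flag inside the connectionProfile entry of your network file, roughly this fragment:
    connectionProfile:
      path: 'connection_files/connection-org1.yaml'
      discover: false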
The issue was in the connection file.
The old file was:
---
name: fabric
version: 2.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
      orderer: '10000'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
    certificateAuthorities:
      - ca.org1.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    tlsCACerts:
      path: crypto-config/peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: http://192.169.0.7:7054
    caName: ca-org1
    tlsCACerts:
      path: crypto-config/peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    httpOptions:
      verify: false
The new connection file that I created is this:
---
name: fabric
description: "Sample connection profile for documentation topic"
version: 2.0.0
channels:
  channelall:
    orderers:
      - orderer.example.com
    peers:
      peer0.org1.example.com:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
      peer0.org2.example.com:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
      peer0.org3.example.com:
        endorsingPeer: false
        chaincodeQuery: false
        ledgerQuery: true
        eventSource: true
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
  Org2:
    mspid: Org2MSP
    peers:
      - peer0.org2.example.com
  Org3:
    mspid: Org3MSP
    peers:
      - peer0.org3.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      request-timeout: 120001
  peer0.org2.example.com:
    url: grpc://192.169.0.10:7051
    grpcOptions:
      ssl-target-name-override: peer0.org2.example.com
      request-timeout: 120001
  peer0.org3.example.com:
    url: grpc://192.169.0.11:7051
    grpcOptions:
      ssl-target-name-override: peer0.org3.example.com
      request-timeout: 120001
It contains all the information about the peers and orderers.
Thanks everybody for the help.

WebdriverIO and Sauce Labs integration not working

When I try connecting WebdriverIO to Sauce Labs, I'm getting the error below.
I'm using the sauce service.
$ ./node_modules/.bin/wdio ./config/int.wdio.conf.js
Execution of 1 spec files started at 2020-08-30T20:11:14.720Z
2020-08-30T20:11:14.768Z INFO @wdio/cli:launcher: Run onPrepare hook
2020-08-30T20:11:14.770Z INFO @wdio/cli:launcher: Run onWorkerStart hook
2020-08-30T20:11:14.772Z INFO @wdio/local-runner: Start worker 0-0 with arg: ./config/int.wdio.conf.js
[0-0] 2020-08-30T20:11:15.414Z INFO @wdio/local-runner: Run worker command: run
[0-0] 2020-08-30T20:11:15.422Z INFO webdriverio: Initiate new session using the ./protocol-stub protocol
[0-0] RUNNING in chrome - /e2e/test/login.int.spec.js
[0-0] 2020-08-30T20:11:15.704Z INFO webdriverio: Initiate new session using the webdriver protocol
[0-0] 2020-08-30T20:11:15.706Z INFO webdriver: [POST] https://ondemand.saucelabs.com/wd/hub/session
[0-0] 2020-08-30T20:11:15.706Z INFO webdriver: DATA {
    capabilities: {
        alwaysMatch: { browserName: 'chrome', acceptInsecureCerts: true },
        firstMatch: [ {} ]
    },
    desiredCapabilities: { browserName: 'chrome', acceptInsecureCerts: true }
}
[0-0] 2020-08-30T20:11:15.733Z ERROR webdriver: RequestError: connect ECONNREFUSED 127.0.0.1:443
at ClientRequest.<anonymous> (/node_modules/got/dist/source/core/index.js:891:25)
at Object.onceWrapper (events.js:418:26)
at ClientRequest.emit (events.js:323:22)
at ClientRequest.EventEmitter.emit (domain.js:482:12)
at ClientRequest.origin.emit (/test/node_modules/@szmarczak/http-timer/dist/source/index.js:39:20)
at TLSSocket.socketErrorListener (_http_client.js:426:9)
at TLSSocket.emit (events.js:311:20)
at TLSSocket.EventEmitter.emit (domain.js:482:12)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1137:16)
[0-0] 2020-08-30T20:11:15.733Z ERROR @wdio/runner: Error: Failed to create session.
Unable to connect to "https://ondemand.saucelabs.com:443/wd/hub", make sure browser driver is running on that address.
If you use services like chromedriver see initialiseServices logs above or in wdio.log file as the service might had problems to start the driver.
at startWebDriverSession (/test/node_modules/webdriver/build/utils.js:45:11)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
[0-0] Error: Failed to create session.
Unable to connect to "https://ondemand.saucelabs.com:443/wd/hub", make sure browser driver is running on that address.
If you use services like chromedriver see initialiseServices logs above or in wdio.log file as the service might had problems to start the driver.
[0-0] FAILED in chrome - /e2e/test/login.int.spec.js
2020-08-30T20:11:15.848Z INFO @wdio/cli:launcher: Run onComplete hook
Spec Files: 0 passed, 1 failed, 1 total (100% completed) in 00:00:01
2020-08-30T20:11:15.849Z INFO @wdio/local-runner: Shutting down spawned worker
2020-08-30T20:11:16.102Z INFO @wdio/local-runner: Waiting for 0 to shut down gracefully
2020-08-30T20:11:16.102Z INFO @wdio/local-runner: shutting down
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
My wdio config file looks like this:
exports.config = {
    runner: 'local',
    user: 'test123',
    key: '65434d10-276f-4305-adb4-93a39848d445',
    region: 'us',
    specs: [
        './e2e/test/*.int.spec.js'
    ],
    exclude: [
        './e2e/test/*.prod.spec.js',
        './e2e/test/*.ie.int.spec.js'
    ],
    maxInstances: 1,
    capabilities: [
        {
            maxInstances: 1,
            browserName: 'chrome',
            acceptInsecureCerts: true
        }
    ],
    logLevel: 'info',
    bail: 0,
    baseUrl: 'https://www.google.com',
    waitforTimeout: 10000,
    connectionRetryTimeout: 120000,
    connectionRetryCount: 3,
    services: ['sauce'],
    framework: 'mocha',
    reporters: ['mochawesome'],
    mochaOpts: {
        ui: 'bdd',
        timeout: 60000
    }
}
Any idea how to fix this?

How to purge test sessions in Botium

I'm trying to run a Test Set, but it looks like it is stuck. All 5 agents are in use and I cannot delete them. Tests fail with ESOCKETTIMEDOUT. I could run the same test without problems before.
I tried clicking "Send cancellation request" on the Test Session in the danger zone to kill it, but I still can't delete the agents ("Delete Botium Agent (only possible if not already used)").
Botium agents pic
Log from Botium:
2019-05-31T08:31:13.892Z: Job queued for execution
2019-05-31T08:31:13.886Z: 2019-05-31T08:31:13.378Z botium-box-worker-runtestcases Started processing, JobId #952.
2019-05-31T08:31:14.077Z: 2019-05-31T08:31:13.382Z botium-BotDriver Loaded Botium configuration file ./botium.json
2019-05-31T08:31:14.143Z: 2019-05-31T08:31:13.388Z botium-BotDriver BuildCompiler: Capabilites: { PROJECTNAME: 'TM new - Test Session',
TEMPDIR: 'botiumwork',
CLEANUPTEMPDIR: true,
WAITFORBOTTIMEOUT: 10000,
SIMULATE_WRITING_SPEED: false,
DOCKERCOMPOSEPATH: 'docker-compose',
DOCKERMACHINEPATH: 'docker-machine',
DOCKERMACHINE: false,
DOCKERIMAGE: 'node:boron',
DOCKERUNIQUECONTAINERNAMES: false,
DOCKERSYSLOGPORT_RANGE: '47100-47299',
BOT_HEALTH_STATUS: 200,
SLACK_PUBLISHPORT_RANGE: '46100-46299',
FACEBOOK_PUBLISHPORT_RANGE: '46300-46499',
FACEBOOK_SEND_DELIVERY_CONFIRMATION: true,
BOTFRAMEWORK_PUBLISHPORT_RANGE: '46500-46699',
BOTFRAMEWORK_WEBHOOK_PORT: 3978,
BOTFRAMEWORK_WEBHOOK_PATH: 'api/messages',
BOTFRAMEWORK_CHANNEL_ID: 'facebook',
SIMPLEREST_PING_RETRIES: 6,
SIMPLEREST_PING_TIMEOUT: 10000,
SIMPLEREST_PING_VERB: 'GET',
SIMPLEREST_METHOD: 'GET',
WEBSPEECH_SERVER_PORT: 46050,
WEBSPEECH_LANGUAGE: 'en-US',
WEBSPEECH_CLOSEBROWSER: true,
SCRIPTING_TXT_EOL: '\n',
SCRIPTING_XLSX_EOL_SPLIT: '\r',
SCRIPTING_XLSX_EOL_WRITE: '\r\n',
SCRIPTING_XLSX_STARTROW: 2,
SCRIPTING_XLSX_STARTCOL: 1,
SCRIPTING_NORMALIZE_TEXT: false,
SCRIPTING_ENABLE_MEMORY: false,
SCRIPTING_MATCHING_MODE: 'includeLowerCase',
SCRIPTING_UTTEXPANSION_MODE: 'all',
SCRIPTING_UTTEXPANSION_RANDOM_COUNT: 1,
SCRIPTING_MEMORYEXPANSION_KEEP_ORIG: false,
RETRY_USERSAYS_ONERROR_REGEXP: [],
RETRY_USERSAYS_NUMRETRIES: 1,
RETRY_USERSAYS_FACTOR: 1,
RETRY_USERSAYS_MINTIMEOUT: 1000,
ASSERTERS:
[ { ref: 'HASLINK',
src: 'botium-asserter-basiclink',
global: false,
args: null } ],
LOGIC_HOOKS: [],
USER_INPUTS: [],
CONTAINERMODE: 'webdriverio',
WEBDRIVERIO_URL: 'https://chat.t-mobile.cz/chat/',
WEBDRIVERIO_PROFILE: '',
WEBDRIVERIO_INPUT_ELEMENT: '<input />',
WEBDRIVERIO_INPUT_ELEMENT_VISIBLE_TIMEOUT: 10000,
WEBDRIVERIO_OUTPUT_ELEMENT:
"//div[#class='gaid-text-message gaid-text-message--isBot'][position()=last()-1]//p",
WEBDRIVERIO_IGNOREUPFRONTMESSAGES: false,
WEBDRIVERIO_USERNAME: '',
WEBDRIVERIO_PASSWORD: '',
WEBDRIVERIO_SCREENSHOTS: 'onstop',
FBPAGERECEIVER_REDISURL: { port: '6379', host: 'redis', db: 0, options: {} },
WEBDRIVERIO_OPTIONS:
{ desiredCapabilities: { browserName: 'chrome', name: 'TM new - Test Session' },
protocol: 'http',
host: '192.168.99.100',
port: '4444',
path: '/wd/hub' } }
2019-05-31T08:31:14.169Z: 2019-05-31T08:31:13.393Z botium-ScriptingProvider Using matching mode: includeLowerCase
2019-05-31T08:31:14.214Z: 2019-05-31T08:31:13.396Z botium-asserterUtils Loaded Default asserter - [ 'BUTTONS',
'MEDIA',
'PAUSE_ASSERTER',
'ENTITIES',
'ENTITY_VALUES',
'INTENT',
'INTENT_CONFIDENCE' ]
2019-05-31T08:31:14.251Z: 2019-05-31T08:31:13.402Z botium-asserterUtils Loaded Default logic hook - [ 'PAUSE',
'WAITFORBOT',
'SET_SCRIPTING_MEMORY',
'CLEAR_SCRIPTING_MEMORY',
'INCLUDE' ]
2019-05-31T08:31:14.339Z: 2019-05-31T08:31:13.403Z botium-asserterUtils Loaded Default user input - [ 'BUTTON', 'MEDIA', 'FORM' ]
2019-05-31T08:31:14.396Z: 2019-05-31T08:31:13.407Z botium-asserterUtils Trying to load HASLINK asserter from botium-asserter-basiclink
2019-05-31T08:31:14.433Z: 2019-05-31T08:31:13.410Z botium-asserterUtils Loaded HASLINK SUCCESSFULLY
2019-05-31T08:31:14.470Z: 2019-05-31T08:31:13.504Z botium-box-worker-runtestcases found 1 convos ...
2019-05-31T08:31:14.512Z: 2019-05-31T08:31:13.504Z botium-box-worker-runtestcases batchNum: 1 batchCount: 1 convosPerBatch: 1 batchStart: 0 batchEnd: 0 batchLength: 1
2019-05-31T08:31:14.548Z: 2019-05-31T08:31:13.507Z botium-BotDriver Build - Botium Core Version: 1.4.14
2019-05-31T08:31:14.586Z: 2019-05-31T08:31:13.510Z botium-BotDriver Build - Capabilites: { PROJECTNAME: 'TM new - Test Session',
TEMPDIR: 'botiumwork',
CLEANUPTEMPDIR: true,
WAITFORBOTTIMEOUT: 10000,
SIMULATE_WRITING_SPEED: false,
DOCKERCOMPOSEPATH: 'docker-compose',
DOCKERMACHINEPATH: 'docker-machine',
DOCKERMACHINE: false,
DOCKERIMAGE: 'node:boron',
DOCKERUNIQUECONTAINERNAMES: false,
DOCKERSYSLOGPORT_RANGE: '47100-47299',
BOT_HEALTH_STATUS: 200,
SLACK_PUBLISHPORT_RANGE: '46100-46299',
FACEBOOK_PUBLISHPORT_RANGE: '46300-46499',
FACEBOOK_SEND_DELIVERY_CONFIRMATION: true,
BOTFRAMEWORK_PUBLISHPORT_RANGE: '46500-46699',
BOTFRAMEWORK_WEBHOOK_PORT: 3978,
BOTFRAMEWORK_WEBHOOK_PATH: 'api/messages',
BOTFRAMEWORK_CHANNEL_ID: 'facebook',
SIMPLEREST_PING_RETRIES: 6,
SIMPLEREST_PING_TIMEOUT: 10000,
SIMPLEREST_PING_VERB: 'GET',
SIMPLEREST_METHOD: 'GET',
WEBSPEECH_SERVER_PORT: 46050,
WEBSPEECH_LANGUAGE: 'en-US',
WEBSPEECH_CLOSEBROWSER: true,
SCRIPTING_TXT_EOL: '\n',
SCRIPTING_XLSX_EOL_SPLIT: '\r',
SCRIPTING_XLSX_EOL_WRITE: '\r\n',
SCRIPTING_XLSX_STARTROW: 2,
SCRIPTING_XLSX_STARTCOL: 1,
SCRIPTING_NORMALIZE_TEXT: false,
SCRIPTING_ENABLE_MEMORY: false,
SCRIPTING_MATCHING_MODE: 'includeLowerCase',
SCRIPTING_UTTEXPANSION_MODE: 'all',
SCRIPTING_UTTEXPANSION_RANDOM_COUNT: 1,
SCRIPTING_MEMORYEXPANSION_KEEP_ORIG: false,
RETRY_USERSAYS_ONERROR_REGEXP: [],
RETRY_USERSAYS_NUMRETRIES: 1,
RETRY_USERSAYS_FACTOR: 1,
RETRY_USERSAYS_MINTIMEOUT: 1000,
ASSERTERS:
[ { ref: 'HASLINK',
src: 'botium-asserter-basiclink',
global: false,
args: null } ],
LOGIC_HOOKS: [],
USER_INPUTS: [],
CONTAINERMODE: 'webdriverio',
WEBDRIVERIO_URL: 'https://chat.t-mobile.cz/chat/',
WEBDRIVERIO_PROFILE: '',
WEBDRIVERIO_INPUT_ELEMENT: '<input />',
WEBDRIVERIO_INPUT_ELEMENT_VISIBLE_TIMEOUT: 10000,
WEBDRIVERIO_OUTPUT_ELEMENT:
"//div[#class='gaid-text-message gaid-text-message--isBot'][position()=last()-1]//p",
WEBDRIVERIO_IGNOREUPFRONTMESSAGES: false,
WEBDRIVERIO_USERNAME: '',
WEBDRIVERIO_PASSWORD: '',
WEBDRIVERIO_SCREENSHOTS: 'onstop',
FBPAGERECEIVER_REDISURL: { port: '6379', host: 'redis', db: 0, options: {} },
WEBDRIVERIO_OPTIONS:
{ desiredCapabilities: { browserName: 'chrome', name: 'TM new - Test Session' },
protocol: 'http',
host: '192.168.99.100',
port: '4444',
path: '/wd/hub' } }
2019-05-31T08:31:14.636Z: 2019-05-31T08:31:13.519Z botium-BotDriver Build - Sources : { LOCALPATH: '.',
GITPATH: 'git',
GITBRANCH: 'master',
GITDIR: '.' }
2019-05-31T08:31:14.671Z: 2019-05-31T08:31:13.524Z botium-BotDriver Build - Envs : { IS_BOTIUM_CONTAINER: true }
2019-05-31T08:31:14.704Z: 2019-05-31T08:31:13.592Z botium-PluginConnectorContainer Invalid Botium plugin loaded from webdriverio, expected PluginVersion, PluginClass fields
2019-05-31T08:31:14.732Z: 2019-05-31T08:31:13.595Z botium-PluginConnectorContainer Botium plugin botium-connector-webdriverio loaded
2019-05-31T08:31:14.769Z: 2019-05-31T08:31:13.597Z botium-connector-webdriverio Validate called
2019-05-31T08:31:14.801Z: 2019-05-31T08:31:13.600Z botium-connector-webdriverio Build called
2019-05-31T08:31:14.837Z: 2019-05-31T08:31:13.603Z botium-connector-webdriverio Start called
2019-05-31T08:31:24.389Z: 2019-05-31T08:31:24.371Z botium-box-worker sending heartbeat ...
2019-05-31T08:36:24.471Z: 2019-05-31T08:36:24.420Z botium-box-worker sending heartbeat ...
2019-05-31T08:37:15.925Z: 2019-05-31T08:37:15.880Z botium-box-worker-runtestcases Test Session Run failed (Error: ESOCKETTIMEDOUT), doing additional BotDriver Clean.
2019-05-31T08:37:15.961Z: 2019-05-31T08:37:15.881Z botium-connector-webdriverio Clean called
2019-05-31T08:40:02.054Z: 2019-05-31T08:40:02.006Z botium-BaseContainer Cleanup rimrafing temp dir /app/agent/botiumwork/TM-new-Test-Session-20190531-083113-vI4Bx
2019-05-31T08:40:02.357Z: Job failed: Error: ESOCKETTIMEDOUT
Selenium hub log:
08:06:36.629 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
08:06:36.849 INFO [GridLauncherV3.lambda$buildLaunchers$5] - Launching Selenium Grid hub on port 4444
2019-05-31 08:06:37.333:INFO::main: Logging initialized #1175ms to org.seleniumhq.jetty9.util.log.StdErrLog
08:06:38.033 INFO [Hub.start] - Selenium Grid hub is up and running
08:06:38.040 INFO [Hub.start] - Nodes should register to http://172.19.0.4:4444/grid/register/
08:06:38.040 INFO [Hub.start] - Clients should connect to http://172.19.0.4:4444/wd/hub
08:06:40.894 INFO [DefaultGridRegistry.add] - Registered a node http://172.19.0.3:5555
08:06:40.907 INFO [DefaultGridRegistry.add] - Registered a node http://172.19.0.2:5555
08:07:47.391 INFO [RequestHandler.process] - Got a request to create a new session: Capabilities {browserName: firefox, handlesAlerts: true, javascriptEnabled: true, locationContextEnabled: true, loggingPrefs: org.openqa.selenium.logging..., name: TM new - Test Session, requestOrigins: {name: webdriverio, url: http://webdriver.io, version: 4.14.4}, rotatable: true}
08:07:47.409 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=ad8a2987-e350-456e-b9cf-25ac008d5255, seleniumProtocol=WebDriver, browserName=firefox, maxInstances=1, moz:firefoxOptions={log={level=info}}, platformName=LINUX, version=67.0, applicationName=, platform=LINUX}
08:13:58.927 INFO [RequestHandler.process] - Got a request to create a new session: Capabilities {browserName: chrome, handlesAlerts: true, javascriptEnabled: true, locationContextEnabled: true, loggingPrefs: org.openqa.selenium.logging..., name: TM new - Test Session, requestOrigins: {name: webdriverio, url: http://webdriver.io, version: 4.14.4}, rotatable: true}
08:13:58.935 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=3f83f707-e0ad-406f-9081-bc7185515bdf, seleniumProtocol=WebDriver, browserName=chrome, maxInstances=1, platformName=LINUX, version=74.0.3729.169, applicationName=, platform=LINUX}
08:31:13.686 INFO [RequestHandler.process] - Got a request to create a new session: Capabilities {browserName: chrome, handlesAlerts: true, javascriptEnabled: true, locationContextEnabled: true, loggingPrefs: org.openqa.selenium.logging..., name: TM new - Test Session, requestOrigins: {name: webdriverio, url: http://webdriver.io, version: 4.14.4}, rotatable: true}
08:31:13.697 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=3f83f707-e0ad-406f-9081-bc7185515bdf, seleniumProtocol=WebDriver, browserName=chrome, maxInstances=1, platformName=LINUX, version=74.0.3729.169, applicationName=, platform=LINUX}
08:39:59.952 WARN [RequestHandler.process] - The client is gone for session ext. key b54b779b8d4cb90133cf3386ca7ef664, terminating
08:40:02.245 INFO [RequestHandler.process] - Got a request to create a new session: Capabilities {browserName: firefox, handlesAlerts: true, javascriptEnabled: true, locationContextEnabled: true, loggingPrefs: org.openqa.selenium.logging..., name: TM new - Test Session, requestOrigins: {name: webdriverio, url: http://webdriver.io, version: 4.14.4}, rotatable: true}
08:40:02.251 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=ad8a2987-e350-456e-b9cf-25ac008d5255, seleniumProtocol=WebDriver, browserName=firefox, maxInstances=1, moz:firefoxOptions={log={level=info}}, platformName=LINUX, version=67.0, applicationName=, platform=LINUX}
IP & PORTS
You can access this container using the following IP address and port:
DOCKER PORT ACCESS URL
Deleting the agent records in Botium Box doesn't help - this is just how Botium Box keeps track of the connected agents, and it has no influence on the actual processes.
The logs you attached don't look bad; it's just that there is obviously a problem when connecting to the Selenium hub. In case the agent processes are really stuck or crashed, you can simply restart the Docker containers to bring them up again.

Kafka Connect failing to read from Kafka topics over SSL

We are running Kafka Connect in our Docker Swarm with the following compose file:
cp-kafka-connect-node:
  image: confluentinc/cp-kafka-connect:5.1.0
  ports:
    - 28085:28085
  secrets:
    - kafka.truststore.jks
    - source: kafka-connect-aws-credentials
      target: /root/.aws/credentials
  environment:
    CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
    CONNECT_LOG4J_ROOT_LEVEL: TRACE
    CONNECT_REST_PORT: 28085
    CONNECT_GROUP_ID: cp-kafka-connect
    CONNECT_CONFIG_STORAGE_TOPIC: dev_cp-kafka-connect-config
    CONNECT_OFFSET_STORAGE_TOPIC: dev_cp-kafka-connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: dev_cp-kafka-connect-status
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: 'false'
    CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: 'false'
    CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_REST_ADVERTISED_HOST_NAME: localhost
    CONNECT_PLUGIN_PATH: /usr/share/java/
    CONNECT_SECURITY_PROTOCOL: SSL
    CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
    CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
    KAFKA_HEAP_OPTS: '-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2'
  deploy:
    replicas: 1
    resources:
      limits:
        cpus: '0.50'
        memory: 4gb
    restart_policy:
      condition: on-failure
      delay: 10s
      max_attempts: 3
      window: 2000s
secrets:
  kafka.truststore.jks:
    external: true
  kafka-connect-aws-credentials:
    external: true
The Kafka Connect node starts up successfully, and I am able to set up tasks and view the status of those tasks.
I called the connector I set up kafka-sink, and I created it with the following config:
"config": {
"connector.class": "io.confluent.connect.s3.S3SinkConnector",
"s3.region": "eu-central-1",
"flush.size": "1",
"schema.compatibility": "NONE",
"tasks.max": "1",
"topics": "input-topic-name",
"s3.part.size": "5242880",
"timezone": "UTC",
"directory.delim": "/",
"locale": "UK",
"s3.compression.type": "gzip",
"format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
"partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
"schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
"name": "kafka-sink",
"value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"s3.bucket.name": "my-s3-bucket",
"rotate.schedule.interval.ms": "60000"
}
This task now says that it is running.
When I did not include the SSL config, specifically:
CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
CONNECT_SECURITY_PROTOCOL: SSL
CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
and instead pointed to a bootstrap server that was exposed with no security:
CONNECT_BOOTSTRAP_SERVERS: insecurekafka:9092
It worked fine: it read from the appropriate input topic and wrote to the S3 bucket with default partitioning.
However, when I run it using the SSL config against my secure Kafka topic, it logs no errors and throws no exceptions, but does nothing at all despite data continuously being pushed to the input topic.
Am I doing something wrong?
This is my first time using Kafka Connect; normally I connect to Kafka from Spring Boot apps, where you just have to specify the truststore location and password in the config.
Am I missing some configuration in either my compose file or my task config?
I think you need to add the SSL config for both the consumer and the producer. Check here: Kafka Connect Encrypt with SSL.
Something like this:
security.protocol=SSL
ssl.truststore.location=~/kafka.truststore.jks
ssl.truststore.password=<password>
ssl.keystore.location=~/kafka.client.keystore.jks
ssl.keystore.password=<password>
ssl.key.password=<password>
producer.security.protocol=SSL
producer.ssl.truststore.location=~/kafka.truststore.jks
producer.ssl.truststore.password=<password>
producer.ssl.keystore.location=~/kafka.client.keystore.jks
producer.ssl.keystore.password=<password>
producer.ssl.key.password=<password>
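Since the worker here is configured through the cp-kafka-connect image's environment variables rather than a properties file, the same settings would go into the compose file as CONNECT_-prefixed variables (the image lowercases them and turns underscores into dots, so consumer.security.protocol becomes CONNECT_CONSUMER_SECURITY_PROTOCOL). A sketch of what could be added to the environment block; for an S3 sink only the consumer side is strictly needed, and keystore entries only matter if the brokers require client authentication:
environment:
  CONNECT_CONSUMER_SECURITY_PROTOCOL: SSL
  CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
  CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: ********
  CONNECT_PRODUCER_SECURITY_PROTOCOL: SSL
  CONNECT_PRODUCER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
  CONNECT_PRODUCER_SSL_TRUSTSTORE_PASSWORD: ********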

Rabbitmqctl command throws error

I am trying to create a 3-node RabbitMQ cluster. I have the first node up and running. When I issue the join_cluster command from node 2, it throws an error saying the node is down.
rabbitmqctl join_cluster rabbit@hostname02
I am getting the following error:
Status of node rabbit@hostname02 ...
Error: unable to connect to node rabbit@hostname02: nodedown

DIAGNOSTICS
===========

attempted to contact: [rabbit@hostname02]

rabbit@hostname02:
  * connected to epmd (port 4369) on hostname02
  * epmd reports: node 'rabbit' not running at all
                  no other nodes on hostname02
  * suggestion: start the node

current node details:
- node name: 'rabbitmq-cli-30@hostname02'
- home dir: /var/lib/rabbitmq
- cookie hash: bygafwoj/ISgb3yKej1pEg==
This is my config file.
[
 {rabbit, [
   {cluster_nodes, {[rabbit@hostname01, rabbitmq@hostname02, rabbit@hostname03], disc}},
   {cluster_partition_handling, ignore},
   {tcp_listen_options,
     [binary,
      {packet, raw},
      {reuseaddr, true},
      {backlog, 128},
      {nodelay, true},
      {exit_on_close, false}]
   },
   {default_user, <<"guest">>},
   {default_pass, <<"guest">>},
   {log_levels, [{autocluster, debug}, {connection, info}]}
 ]},
 {kernel, [
 ]},
 {rabbitmq_management, [
   {listener, [
     {port, 15672}
   ]}
 ]}
].
% EOF
I have updated the /etc/hosts file with the details of all 3 nodes on all 3 servers. I am not sure where I am going wrong.