I have two tests, one reading from and one writing to the blockchain. I'm getting two different errors: one at the start of the run and one during the writing test. The reading test works normally without problems.
Initial error:
2021.12.21-16:43:01.870 info [caliper] [round-orchestrator] Preparing worker connections
2021.12.21-16:43:01.870 info [caliper] [worker-orchestrator] Launching worker 1 of 2
2021.12.21-16:43:01.878 info [caliper] [worker-orchestrator] Launching worker 2 of 2
2021.12.21-16:43:01.884 info [caliper] [worker-orchestrator] Messenger not configured, entering configure phase...
2021.12.21-16:43:01.885 info [caliper] [worker-orchestrator] No existing workers detected, entering worker launch phase...
2021.12.21-16:43:01.885 info [caliper] [worker-orchestrator] Waiting for 2 workers to be connected...
2021.12.21-16:43:02.426 info [caliper] [cli-launch-worker] Set workspace path: /home/ubuntu/caliper/caliper-benchmarks/monitor
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set benchmark configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/config.yaml
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set network configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/network.yaml
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set SUT type: fabric
2021.12.21-16:43:02.444 info [caliper] [cli-launch-worker] Set workspace path: /home/ubuntu/caliper/caliper-benchmarks/monitor
2021.12.21-16:43:02.446 info [caliper] [cli-launch-worker] Set benchmark configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/config.yaml
2021.12.21-16:43:02.446 info [caliper] [cli-launch-worker] Set network configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/network.yaml
2021.12.21-16:43:02.447 info [caliper] [cli-launch-worker] Set SUT type: fabric
2021.12.21-16:43:02.505 info [caliper] [worker-orchestrator] 2 workers connected, progressing to worker assignment phase.
2021.12.21-16:43:02.505 info [caliper] [worker-orchestrator] Workers currently unassigned, awaiting index assignment...
2021.12.21-16:43:02.506 info [caliper] [worker-orchestrator] Waiting for 2 workers to be assigned...
2021.12.21-16:43:02.559 info [caliper] [worker-orchestrator] 2 workers assigned, progressing to worker initialization phase.
2021.12.21-16:43:02.560 info [caliper] [worker-orchestrator] Waiting for 2 workers to be ready...
2021.12.21-16:43:03.629 info [caliper] [worker-message-handler] Initializing Worker#1...
2021.12.21-16:43:03.629 info [caliper] [fabric-connector] Initializing gateway connector compatible with installed SDK: 2.2.3
2021.12.21-16:43:03.629 info [caliper] [IdentityManager] Adding User1 (admin=false) as User1 for organization Org1MSP
2021.12.21-16:43:03.629 info [caliper] [worker-message-handler] Worker#1 initialized
2021.12.21-16:43:03.683 info [caliper] [worker-orchestrator] 2 workers ready, progressing to test preparation phase.
2021.12.21-16:43:03.684 info [caliper] [round-orchestrator] Started round 1 (Set)
2021.12.21-16:43:03.690 info [caliper] [worker-message-handler] Preparing Worker#1 for Round#0
2021.12.21-16:43:03.696 info [caliper] [connectors/v2/FabricGateway] Connecting user with identity User1 to a Network Gateway
2021.12.21-16:43:04.005 info [caliper] [worker-message-handler] Initializing Worker#0...
2021.12.21-16:43:04.005 info [caliper] [fabric-connector] Initializing gateway connector compatible with installed SDK: 2.2.3
2021.12.21-16:43:04.005 info [caliper] [IdentityManager] Adding User1 (admin=false) as User1 for organization Org1MSP
2021.12.21-16:43:04.005 info [caliper] [worker-message-handler] Worker#0 initialized
2021.12.21-16:43:04.006 info [caliper] [worker-message-handler] Preparing Worker#0 for Round#0
2021.12.21-16:43:04.006 info [caliper] [connectors/v2/FabricGateway] Connecting user with identity User1 to a Network Gateway
2021.12.21-16:43:04.007 info [caliper] [connectors/v2/FabricGateway] Successfully connected user with identity User1 to a Network Gateway
2021.12.21-16:43:04.008 info [caliper] [connectors/v2/FabricGateway] Generating contract map for user User1
2021.12.21-16:43:04.018 info [caliper] [connectors/v2/FabricGateway] Successfully connected user with identity User1 to a Network Gateway
2021.12.21-16:43:04.019 info [caliper] [connectors/v2/FabricGateway] Generating contract map for user User1
2021-12-21T16:43:07.083Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.086Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server orderer.example.com:7050 url:grpc://localhost:7050 timeout:3000
2021-12-21T16:43:07.088Z - error: [DiscoveryService]: _buildOrderer[channelall] - Unable to connect to the discovered orderer orderer.example.com:7050 due to Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.085Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.090Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server orderer.example.com:7050 url:grpc://localhost:7050 timeout:3000
2021-12-21T16:43:07.092Z - error: [DiscoveryService]: _buildOrderer[channelall] - Unable to connect to the discovered orderer orderer.example.com:7050 due to Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
The second error that occurs in the writing test is the following:
2021.12.21-16:43:07.112 info [caliper] [worker-orchestrator] 2 workers prepared, progressing to test phase.
2021.12.21-16:43:07.112 info [caliper] [round-orchestrator] Monitors successfully started
2021.12.21-16:43:07.115 info [caliper] [worker-message-handler] Worker#1 is starting Round#0
2021.12.21-16:43:07.116 info [caliper] [worker-message-handler] Worker#0 is starting Round#0
2021.12.21-16:43:07.123 info [caliper] [caliper-worker] Worker #1 starting workload loop
2021.12.21-16:43:07.126 info [caliper] [caliper-worker] Worker #0 starting workload loop
2021.12.21-16:43:07.941 error [caliper] [connectors/v2/FabricGateway] Failed to perform submit transaction [set] using arguments [node1,{'CPU':50,'MEM':50,'STG':50.0,'DAT':'2020-11-17T00:10:00Z'}], with error: Error: No endorsement plan available
at DiscoveryHandler.endorse (/home/ubuntu/caliper/node_modules/fabric-network/node_modules/fabric-common/lib/DiscoveryHandler.js:208:10)
at process._tickCallback (internal/process/next_tick.js:68:7)
Connection File
---
name: fabric
version: 2.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
    certificateAuthorities:
      - ca.org1.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    tlsCACerts:
      path: peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://192.169.0.7:7054
    caName: ca-org1
    tlsCACerts:
      path: peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    httpOptions:
      verify: false
Network File
name: Fabric
version: '2.0.0'
caliper:
  blockchain: fabric
  sutOptions:
    mutualTls: false
organizations:
  - mspid: Org1MSP
    identities:
      certificates:
        - name: 'User1'
          clientPrivateKey:
            path: 'peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/priv_sk'
          clientSignedCert:
            path: 'peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem'
    connectionProfile:
      path: 'connection_files/connection-org1.yaml'
      discover: true
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
channels:
  - channelName: channelall
    contracts:
      - id: monitor
I would appreciate any tips that could help me solve these problems and continue development.
From the network file you posted, a couple of points:
You can't define any nodes in it (for example, you've added orderers); they are ignored.
By setting discover to true in your network file, you've declared your connection profile to be a dynamic profile. This means discovery will be used to determine the network topology, and the nodes you have explicitly defined in your connection profile may not be used. If you want to be explicit in your connection profile (and thus define a static connection profile), as you have in your example above, you should set discover to false, which will hopefully solve your problem.
As a side note, if you use discovery, the node-sdk (used by Caliper) by default converts all discovered node hosts to localhost, which is why you see it trying to contact localhost. To disable this, see the Runtime settings section at https://hyperledger.github.io/caliper/v0.4.2/fabric-config/new/
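For reference, a minimal sketch of the relevant part of the network file with discovery switched off (assuming everything else stays exactly as you posted it):
    connectionProfile:
      path: 'connection_files/connection-org1.yaml'
      discover: false
If you prefer to keep using discovery, the localhost conversion can be turned off via the Fabric gateway localhost runtime setting described on that page (if I remember the flag correctly, something like --caliper-fabric-gateway-localhost false).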
The issue was in the connection file.
The old file was:
---
name: fabric
version: 2.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
      orderer: '10000'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
    certificateAuthorities:
      - ca.org1.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    tlsCACerts:
      path: crypto-config/peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: http://192.169.0.7:7054
    caName: ca-org1
    tlsCACerts:
      path: crypto-config/peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    httpOptions:
      verify: false
The new connection file that I created is this:
---
name: fabric
description: "Sample connection profile for documentation topic"
version: 2.0.0
channels:
  channelall:
    orderers:
      - orderer.example.com
    peers:
      peer0.org1.example.com:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
      peer0.org2.example.com:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
      peer0.org3.example.com:
        endorsingPeer: false
        chaincodeQuery: false
        ledgerQuery: true
        eventSource: true
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
  Org2:
    mspid: Org2MSP
    peers:
      - peer0.org2.example.com
  Org3:
    mspid: Org3MSP
    peers:
      - peer0.org3.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      request-timeout: 120001
  peer0.org2.example.com:
    url: grpc://192.169.0.10:7051
    grpcOptions:
      ssl-target-name-override: peer0.org2.example.com
      request-timeout: 120001
  peer0.org3.example.com:
    url: grpc://192.169.0.11:7051
    grpcOptions:
      ssl-target-name-override: peer0.org3.example.com
      request-timeout: 120001
It contains all the information about the peers and orderers.
Thanks everybody for the help.
When I try to connect WebdriverIO with Sauce Labs, I get the error below.
I'm using the sauce service.
$ ./node_modules/.bin/wdio ./config/int.wdio.conf.js
Execution of 1 spec files started at 2020-08-30T20:11:14.720Z
2020-08-30T20:11:14.768Z INFO @wdio/cli:launcher: Run onPrepare hook
2020-08-30T20:11:14.770Z INFO @wdio/cli:launcher: Run onWorkerStart hook
2020-08-30T20:11:14.772Z INFO @wdio/local-runner: Start worker 0-0 with arg: ./config/int.wdio.conf.js
[0-0] 2020-08-30T20:11:15.414Z INFO @wdio/local-runner: Run worker command: run
[0-0] 2020-08-30T20:11:15.422Z INFO webdriverio: Initiate new session using the ./protocol-stub protocol
[0-0] RUNNING in chrome - /e2e/test/login.int.spec.js
[0-0] 2020-08-30T20:11:15.704Z INFO webdriverio: Initiate new session using the webdriver protocol
[0-0] 2020-08-30T20:11:15.706Z INFO webdriver: [POST] https://ondemand.saucelabs.com/wd/hub/session
[0-0] 2020-08-30T20:11:15.706Z INFO webdriver: DATA {
capabilities: {
alwaysMatch: { browserName: 'chrome', acceptInsecureCerts: true },
firstMatch: [ {} ]
},
desiredCapabilities: { browserName: 'chrome', acceptInsecureCerts: true }
}
[0-0] 2020-08-30T20:11:15.733Z ERROR webdriver: RequestError: connect ECONNREFUSED 127.0.0.1:443
at ClientRequest.<anonymous> (/node_modules/got/dist/source/core/index.js:891:25)
at Object.onceWrapper (events.js:418:26)
at ClientRequest.emit (events.js:323:22)
at ClientRequest.EventEmitter.emit (domain.js:482:12)
at ClientRequest.origin.emit (/test/node_modules/@szmarczak/http-timer/dist/source/index.js:39:20)
at TLSSocket.socketErrorListener (_http_client.js:426:9)
at TLSSocket.emit (events.js:311:20)
at TLSSocket.EventEmitter.emit (domain.js:482:12)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1137:16)
[0-0] 2020-08-30T20:11:15.733Z ERROR @wdio/runner: Error: Failed to create session.
Unable to connect to "https://ondemand.saucelabs.com:443/wd/hub", make sure browser driver is running on that address.
If you use services like chromedriver see initialiseServices logs above or in wdio.log file as the service might had problems to start the driver.
at startWebDriverSession (/test/node_modules/webdriver/build/utils.js:45:11)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
[0-0] Error: Failed to create session.
Unable to connect to "https://ondemand.saucelabs.com:443/wd/hub", make sure browser driver is running on that address.
If you use services like chromedriver see initialiseServices logs above or in wdio.log file as the service might had problems to start the driver.
[0-0] FAILED in chrome - /e2e/test/login.int.spec.js
2020-08-30T20:11:15.848Z INFO @wdio/cli:launcher: Run onComplete hook
Spec Files: 0 passed, 1 failed, 1 total (100% completed) in 00:00:01
2020-08-30T20:11:15.849Z INFO @wdio/local-runner: Shutting down spawned worker
2020-08-30T20:11:16.102Z INFO @wdio/local-runner: Waiting for 0 to shut down gracefully
2020-08-30T20:11:16.102Z INFO @wdio/local-runner: shutting down
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
My wdio config file looks like this:
exports.config = {
runner: 'local',
user: 'test123',
key: '65434d10-276f-4305-adb4-93a39848d445',
region: 'us',
specs: [
'./e2e/test/*.int.spec.js'
],
exclude: [
'./e2e/test/*.prod.spec.js',
'./e2e/test/*.ie.int.spec.js'
],
maxInstances: 1,
capabilities: [
{
maxInstances: 1,
browserName: 'chrome',
acceptInsecureCerts: true
}
],
logLevel: 'info',
bail: 0,
baseUrl: 'https://www.google.com',
waitforTimeout: 10000,
connectionRetryTimeout: 120000,
connectionRetryCount: 3,
services: ['sauce'],
framework: 'mocha',
reporters: ['mochawesome'],
mochaOpts: {
ui: 'bdd',
timeout: 60000
}
}
Any idea how to fix this?
I'm trying to run a Test Set, but it looks like it is stuck. All 5 agents are in use and I cannot delete them. Tests fail with ESOCKETTIMEDOUT. I could run the same tests without problems before.
I tried clicking "Send cancellation request" on the Test Session in the danger zone to kill it, but I still can't delete the agents (Delete Botium Agent (only possible if not already used)).
Botium agents pic
Log from Botium:
2019-05-31T08:31:13.892Z: Job queued for execution
2019-05-31T08:31:13.886Z: 2019-05-31T08:31:13.378Z botium-box-worker-runtestcases Started processing, JobId #952.
2019-05-31T08:31:14.077Z: 2019-05-31T08:31:13.382Z botium-BotDriver Loaded Botium configuration file ./botium.json
2019-05-31T08:31:14.143Z: 2019-05-31T08:31:13.388Z botium-BotDriver BuildCompiler: Capabilites: { PROJECTNAME: 'TM new - Test Session',
TEMPDIR: 'botiumwork',
CLEANUPTEMPDIR: true,
WAITFORBOTTIMEOUT: 10000,
SIMULATE_WRITING_SPEED: false,
DOCKERCOMPOSEPATH: 'docker-compose',
DOCKERMACHINEPATH: 'docker-machine',
DOCKERMACHINE: false,
DOCKERIMAGE: 'node:boron',
DOCKERUNIQUECONTAINERNAMES: false,
DOCKERSYSLOGPORT_RANGE: '47100-47299',
BOT_HEALTH_STATUS: 200,
SLACK_PUBLISHPORT_RANGE: '46100-46299',
FACEBOOK_PUBLISHPORT_RANGE: '46300-46499',
FACEBOOK_SEND_DELIVERY_CONFIRMATION: true,
BOTFRAMEWORK_PUBLISHPORT_RANGE: '46500-46699',
BOTFRAMEWORK_WEBHOOK_PORT: 3978,
BOTFRAMEWORK_WEBHOOK_PATH: 'api/messages',
BOTFRAMEWORK_CHANNEL_ID: 'facebook',
SIMPLEREST_PING_RETRIES: 6,
SIMPLEREST_PING_TIMEOUT: 10000,
SIMPLEREST_PING_VERB: 'GET',
SIMPLEREST_METHOD: 'GET',
WEBSPEECH_SERVER_PORT: 46050,
WEBSPEECH_LANGUAGE: 'en-US',
WEBSPEECH_CLOSEBROWSER: true,
SCRIPTING_TXT_EOL: '\n',
SCRIPTING_XLSX_EOL_SPLIT: '\r',
SCRIPTING_XLSX_EOL_WRITE: '\r\n',
SCRIPTING_XLSX_STARTROW: 2,
SCRIPTING_XLSX_STARTCOL: 1,
SCRIPTING_NORMALIZE_TEXT: false,
SCRIPTING_ENABLE_MEMORY: false,
SCRIPTING_MATCHING_MODE: 'includeLowerCase',
SCRIPTING_UTTEXPANSION_MODE: 'all',
SCRIPTING_UTTEXPANSION_RANDOM_COUNT: 1,
SCRIPTING_MEMORYEXPANSION_KEEP_ORIG: false,
RETRY_USERSAYS_ONERROR_REGEXP: [],
RETRY_USERSAYS_NUMRETRIES: 1,
RETRY_USERSAYS_FACTOR: 1,
RETRY_USERSAYS_MINTIMEOUT: 1000,
ASSERTERS:
[ { ref: 'HASLINK',
src: 'botium-asserter-basiclink',
global: false,
args: null } ],
LOGIC_HOOKS: [],
USER_INPUTS: [],
CONTAINERMODE: 'webdriverio',
WEBDRIVERIO_URL: 'https://chat.t-mobile.cz/chat/',
WEBDRIVERIO_PROFILE: '',
WEBDRIVERIO_INPUT_ELEMENT: '<input />',
WEBDRIVERIO_INPUT_ELEMENT_VISIBLE_TIMEOUT: 10000,
WEBDRIVERIO_OUTPUT_ELEMENT:
"//div[@class='gaid-text-message gaid-text-message--isBot'][position()=last()-1]//p",
WEBDRIVERIO_IGNOREUPFRONTMESSAGES: false,
WEBDRIVERIO_USERNAME: '',
WEBDRIVERIO_PASSWORD: '',
WEBDRIVERIO_SCREENSHOTS: 'onstop',
FBPAGERECEIVER_REDISURL: { port: '6379', host: 'redis', db: 0, options: {} },
WEBDRIVERIO_OPTIONS:
{ desiredCapabilities: { browserName: 'chrome', name: 'TM new - Test Session' },
protocol: 'http',
host: '192.168.99.100',
port: '4444',
path: '/wd/hub' } }
2019-05-31T08:31:14.169Z: 2019-05-31T08:31:13.393Z botium-ScriptingProvider Using matching mode: includeLowerCase
2019-05-31T08:31:14.214Z: 2019-05-31T08:31:13.396Z botium-asserterUtils Loaded Default asserter - [ 'BUTTONS',
'MEDIA',
'PAUSE_ASSERTER',
'ENTITIES',
'ENTITY_VALUES',
'INTENT',
'INTENT_CONFIDENCE' ]
2019-05-31T08:31:14.251Z: 2019-05-31T08:31:13.402Z botium-asserterUtils Loaded Default logic hook - [ 'PAUSE',
'WAITFORBOT',
'SET_SCRIPTING_MEMORY',
'CLEAR_SCRIPTING_MEMORY',
'INCLUDE' ]
2019-05-31T08:31:14.339Z: 2019-05-31T08:31:13.403Z botium-asserterUtils Loaded Default user input - [ 'BUTTON', 'MEDIA', 'FORM' ]
2019-05-31T08:31:14.396Z: 2019-05-31T08:31:13.407Z botium-asserterUtils Trying to load HASLINK asserter from botium-asserter-basiclink
2019-05-31T08:31:14.433Z: 2019-05-31T08:31:13.410Z botium-asserterUtils Loaded HASLINK SUCCESSFULLY
2019-05-31T08:31:14.470Z: 2019-05-31T08:31:13.504Z botium-box-worker-runtestcases found 1 convos ...
2019-05-31T08:31:14.512Z: 2019-05-31T08:31:13.504Z botium-box-worker-runtestcases batchNum: 1 batchCount: 1 convosPerBatch: 1 batchStart: 0 batchEnd: 0 batchLength: 1
2019-05-31T08:31:14.548Z: 2019-05-31T08:31:13.507Z botium-BotDriver Build - Botium Core Version: 1.4.14
2019-05-31T08:31:14.586Z: 2019-05-31T08:31:13.510Z botium-BotDriver Build - Capabilites: { PROJECTNAME: 'TM new - Test Session',
TEMPDIR: 'botiumwork',
CLEANUPTEMPDIR: true,
WAITFORBOTTIMEOUT: 10000,
SIMULATE_WRITING_SPEED: false,
DOCKERCOMPOSEPATH: 'docker-compose',
DOCKERMACHINEPATH: 'docker-machine',
DOCKERMACHINE: false,
DOCKERIMAGE: 'node:boron',
DOCKERUNIQUECONTAINERNAMES: false,
DOCKERSYSLOGPORT_RANGE: '47100-47299',
BOT_HEALTH_STATUS: 200,
SLACK_PUBLISHPORT_RANGE: '46100-46299',
FACEBOOK_PUBLISHPORT_RANGE: '46300-46499',
FACEBOOK_SEND_DELIVERY_CONFIRMATION: true,
BOTFRAMEWORK_PUBLISHPORT_RANGE: '46500-46699',
BOTFRAMEWORK_WEBHOOK_PORT: 3978,
BOTFRAMEWORK_WEBHOOK_PATH: 'api/messages',
BOTFRAMEWORK_CHANNEL_ID: 'facebook',
SIMPLEREST_PING_RETRIES: 6,
SIMPLEREST_PING_TIMEOUT: 10000,
SIMPLEREST_PING_VERB: 'GET',
SIMPLEREST_METHOD: 'GET',
WEBSPEECH_SERVER_PORT: 46050,
WEBSPEECH_LANGUAGE: 'en-US',
WEBSPEECH_CLOSEBROWSER: true,
SCRIPTING_TXT_EOL: '\n',
SCRIPTING_XLSX_EOL_SPLIT: '\r',
SCRIPTING_XLSX_EOL_WRITE: '\r\n',
SCRIPTING_XLSX_STARTROW: 2,
SCRIPTING_XLSX_STARTCOL: 1,
SCRIPTING_NORMALIZE_TEXT: false,
SCRIPTING_ENABLE_MEMORY: false,
SCRIPTING_MATCHING_MODE: 'includeLowerCase',
SCRIPTING_UTTEXPANSION_MODE: 'all',
SCRIPTING_UTTEXPANSION_RANDOM_COUNT: 1,
SCRIPTING_MEMORYEXPANSION_KEEP_ORIG: false,
RETRY_USERSAYS_ONERROR_REGEXP: [],
RETRY_USERSAYS_NUMRETRIES: 1,
RETRY_USERSAYS_FACTOR: 1,
RETRY_USERSAYS_MINTIMEOUT: 1000,
ASSERTERS:
[ { ref: 'HASLINK',
src: 'botium-asserter-basiclink',
global: false,
args: null } ],
LOGIC_HOOKS: [],
USER_INPUTS: [],
CONTAINERMODE: 'webdriverio',
WEBDRIVERIO_URL: 'https://chat.t-mobile.cz/chat/',
WEBDRIVERIO_PROFILE: '',
WEBDRIVERIO_INPUT_ELEMENT: '<input />',
WEBDRIVERIO_INPUT_ELEMENT_VISIBLE_TIMEOUT: 10000,
WEBDRIVERIO_OUTPUT_ELEMENT:
"//div[@class='gaid-text-message gaid-text-message--isBot'][position()=last()-1]//p",
WEBDRIVERIO_IGNOREUPFRONTMESSAGES: false,
WEBDRIVERIO_USERNAME: '',
WEBDRIVERIO_PASSWORD: '',
WEBDRIVERIO_SCREENSHOTS: 'onstop',
FBPAGERECEIVER_REDISURL: { port: '6379', host: 'redis', db: 0, options: {} },
WEBDRIVERIO_OPTIONS:
{ desiredCapabilities: { browserName: 'chrome', name: 'TM new - Test Session' },
protocol: 'http',
host: '192.168.99.100',
port: '4444',
path: '/wd/hub' } }
2019-05-31T08:31:14.636Z: 2019-05-31T08:31:13.519Z botium-BotDriver Build - Sources : { LOCALPATH: '.',
GITPATH: 'git',
GITBRANCH: 'master',
GITDIR: '.' }
2019-05-31T08:31:14.671Z: 2019-05-31T08:31:13.524Z botium-BotDriver Build - Envs : { IS_BOTIUM_CONTAINER: true }
2019-05-31T08:31:14.704Z: 2019-05-31T08:31:13.592Z botium-PluginConnectorContainer Invalid Botium plugin loaded from webdriverio, expected PluginVersion, PluginClass fields
2019-05-31T08:31:14.732Z: 2019-05-31T08:31:13.595Z botium-PluginConnectorContainer Botium plugin botium-connector-webdriverio loaded
2019-05-31T08:31:14.769Z: 2019-05-31T08:31:13.597Z botium-connector-webdriverio Validate called
2019-05-31T08:31:14.801Z: 2019-05-31T08:31:13.600Z botium-connector-webdriverio Build called
2019-05-31T08:31:14.837Z: 2019-05-31T08:31:13.603Z botium-connector-webdriverio Start called
2019-05-31T08:31:24.389Z: 2019-05-31T08:31:24.371Z botium-box-worker sending heartbeat ...
2019-05-31T08:36:24.471Z: 2019-05-31T08:36:24.420Z botium-box-worker sending heartbeat ...
2019-05-31T08:37:15.925Z: 2019-05-31T08:37:15.880Z botium-box-worker-runtestcases Test Session Run failed (Error: ESOCKETTIMEDOUT), doing additional BotDriver Clean.
2019-05-31T08:37:15.961Z: 2019-05-31T08:37:15.881Z botium-connector-webdriverio Clean called
2019-05-31T08:40:02.054Z: 2019-05-31T08:40:02.006Z botium-BaseContainer Cleanup rimrafing temp dir /app/agent/botiumwork/TM-new-Test-Session-20190531-083113-vI4Bx
2019-05-31T08:40:02.357Z: Job failed: Error: ESOCKETTIMEDOUT
Selenium hub log:
08:06:36.629 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
08:06:36.849 INFO [GridLauncherV3.lambda$buildLaunchers$5] - Launching Selenium Grid hub on port 4444
2019-05-31 08:06:37.333:INFO::main: Logging initialized #1175ms to org.seleniumhq.jetty9.util.log.StdErrLog
08:06:38.033 INFO [Hub.start] - Selenium Grid hub is up and running
08:06:38.040 INFO [Hub.start] - Nodes should register to http://172.19.0.4:4444/grid/register/
08:06:38.040 INFO [Hub.start] - Clients should connect to http://172.19.0.4:4444/wd/hub
08:06:40.894 INFO [DefaultGridRegistry.add] - Registered a node http://172.19.0.3:5555
08:06:40.907 INFO [DefaultGridRegistry.add] - Registered a node http://172.19.0.2:5555
08:07:47.391 INFO [RequestHandler.process] - Got a request to create a new session: Capabilities {browserName: firefox, handlesAlerts: true, javascriptEnabled: true, locationContextEnabled: true, loggingPrefs: org.openqa.selenium.logging..., name: TM new - Test Session, requestOrigins: {name: webdriverio, url: http://webdriver.io, version: 4.14.4}, rotatable: true}
08:07:47.409 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=ad8a2987-e350-456e-b9cf-25ac008d5255, seleniumProtocol=WebDriver, browserName=firefox, maxInstances=1, moz:firefoxOptions={log={level=info}}, platformName=LINUX, version=67.0, applicationName=, platform=LINUX}
08:13:58.927 INFO [RequestHandler.process] - Got a request to create a new session: Capabilities {browserName: chrome, handlesAlerts: true, javascriptEnabled: true, locationContextEnabled: true, loggingPrefs: org.openqa.selenium.logging..., name: TM new - Test Session, requestOrigins: {name: webdriverio, url: http://webdriver.io, version: 4.14.4}, rotatable: true}
08:13:58.935 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=3f83f707-e0ad-406f-9081-bc7185515bdf, seleniumProtocol=WebDriver, browserName=chrome, maxInstances=1, platformName=LINUX, version=74.0.3729.169, applicationName=, platform=LINUX}
08:31:13.686 INFO [RequestHandler.process] - Got a request to create a new session: Capabilities {browserName: chrome, handlesAlerts: true, javascriptEnabled: true, locationContextEnabled: true, loggingPrefs: org.openqa.selenium.logging..., name: TM new - Test Session, requestOrigins: {name: webdriverio, url: http://webdriver.io, version: 4.14.4}, rotatable: true}
08:31:13.697 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=3f83f707-e0ad-406f-9081-bc7185515bdf, seleniumProtocol=WebDriver, browserName=chrome, maxInstances=1, platformName=LINUX, version=74.0.3729.169, applicationName=, platform=LINUX}
08:39:59.952 WARN [RequestHandler.process] - The client is gone for session ext. key b54b779b8d4cb90133cf3386ca7ef664, terminating
08:40:02.245 INFO [RequestHandler.process] - Got a request to create a new session: Capabilities {browserName: firefox, handlesAlerts: true, javascriptEnabled: true, locationContextEnabled: true, loggingPrefs: org.openqa.selenium.logging..., name: TM new - Test Session, requestOrigins: {name: webdriverio, url: http://webdriver.io, version: 4.14.4}, rotatable: true}
08:40:02.251 INFO [TestSlot.getNewSession] - Trying to create a new session on test slot {server:CONFIG_UUID=ad8a2987-e350-456e-b9cf-25ac008d5255, seleniumProtocol=WebDriver, browserName=firefox, maxInstances=1, moz:firefoxOptions={log={level=info}}, platformName=LINUX, version=67.0, applicationName=, platform=LINUX}
Deleting the agent records in Botium Box doesn't help - this is just how Botium Box keeps track of the connected agents; it has no influence on the actual processes.
The logs you attached don't look bad; it's just that there is obviously a problem when connecting to the Selenium hub. If the agent processes are really stuck or have crashed, you can just restart the Docker containers to bring them up again.
We are running Kafka Connect in our Docker Swarm with the following compose file:
cp-kafka-connect-node:
  image: confluentinc/cp-kafka-connect:5.1.0
  ports:
    - 28085:28085
  secrets:
    - kafka.truststore.jks
    - source: kafka-connect-aws-credentials
      target: /root/.aws/credentials
  environment:
    CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
    CONNECT_LOG4J_ROOT_LEVEL: TRACE
    CONNECT_REST_PORT: 28085
    CONNECT_GROUP_ID: cp-kafka-connect
    CONNECT_CONFIG_STORAGE_TOPIC: dev_cp-kafka-connect-config
    CONNECT_OFFSET_STORAGE_TOPIC: dev_cp-kafka-connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: dev_cp-kafka-connect-status
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: 'false'
    CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: 'false'
    CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_REST_ADVERTISED_HOST_NAME: localhost
    CONNECT_PLUGIN_PATH: /usr/share/java/
    CONNECT_SECURITY_PROTOCOL: SSL
    CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
    CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
    KAFKA_HEAP_OPTS: '-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2'
  deploy:
    replicas: 1
    resources:
      limits:
        cpus: '0.50'
        memory: 4gb
    restart_policy:
      condition: on-failure
      delay: 10s
      max_attempts: 3
      window: 2000s
secrets:
  kafka.truststore.jks:
    external: true
  kafka-connect-aws-credentials:
    external: true
The kafka connect node starts up successfully, and I am able to set up tasks and view the status of those tasks...
The connector I set up is called kafka-sink; I created it with the following config:
"config": {
"connector.class": "io.confluent.connect.s3.S3SinkConnector",
"s3.region": "eu-central-1",
"flush.size": "1",
"schema.compatibility": "NONE",
"tasks.max": "1",
"topics": "input-topic-name",
"s3.part.size": "5242880",
"timezone": "UTC",
"directory.delim": "/",
"locale": "UK",
"s3.compression.type": "gzip",
"format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
"partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
"schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
"name": "kafka-sink",
"value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"s3.bucket.name": "my-s3-bucket",
"rotate.schedule.interval.ms": "60000"
}
This task now says that it is running.
When I did not include the SSL config, specifically:
CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
CONNECT_SECURITY_PROTOCOL: SSL
CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
and instead pointed to a bootstrap server that was exposed with no security:
CONNECT_BOOTSTRAP_SERVERS: insecurekafka:9092
It worked fine: it read from the appropriate input topic and wrote to the S3 bucket with default partitioning...
However, when I run it using the SSL config against my secure Kafka topic, it logs no errors, throws no exceptions, and does nothing at all, despite data continuously being pushed to the input topic...
Am I doing something wrong?
This is my first time using Kafka Connect; normally I connect to Kafka from Spring Boot apps, where you just have to specify the truststore location and password in the config.
Am I missing some configuration in either my compose file or my task config?
I think you need to add SSL config for both the consumer and the producer. Check here: Kafka Connect Encrypt with SSL.
Something like this:
security.protocol=SSL
ssl.truststore.location=~/kafka.truststore.jks
ssl.truststore.password=<password>
ssl.keystore.location=~/kafka.client.keystore.jks
ssl.keystore.password=<password>
ssl.key.password=<password>
producer.security.protocol=SSL
producer.ssl.truststore.location=~/kafka.truststore.jks
producer.ssl.truststore.password=<password>
producer.ssl.keystore.location=~/kafka.client.keystore.jks
producer.ssl.keystore.password=<password>
producer.ssl.key.password=<password>
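Since you configure the worker through environment variables on the confluentinc/cp-kafka-connect image, those properties map to CONNECT_-prefixed variables (dots become underscores). A rough sketch of what you might add to the environment section of your compose file, assuming your brokers only require server-side TLS so that only the truststore entries are needed:
    CONNECT_CONSUMER_SECURITY_PROTOCOL: SSL
    CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
    CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: ********
    CONNECT_PRODUCER_SECURITY_PROTOCOL: SSL
    CONNECT_PRODUCER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
    CONNECT_PRODUCER_SSL_TRUSTSTORE_PASSWORD: ********
For an S3 sink it is the consumer settings that matter, because the sink tasks consume from your input topic; the worker-level CONNECT_SECURITY_PROTOCOL settings only cover the worker's own internal-topic clients. The keystore and key entries from the example above are only needed if your brokers enforce mutual TLS (client authentication).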
I am trying to create a 3-node RabbitMQ cluster. I have the first node up and running. When I issue the join_cluster command from node 2, it throws an error saying the node is down.
rabbitmqctl join_cluster rabbit@hostname02
I am getting the following error:
Status of node rabbit@hostname02 ...
Error: unable to connect to node rabbit@hostname02: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@hostname02]
rabbit@hostname02:
* connected to epmd (port 4369) on hostname02
* epmd reports: node 'rabbit' not running at all
no other nodes on hostname02
* suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-30@hostname02'
- home dir: /var/lib/rabbitmq
- cookie hash: bygafwoj/ISgb3yKej1pEg==
This is my config file.
[
{rabbit, [
{cluster_nodes, {[rabbit@hostname01, rabbitmq@hostname02, rabbit@hostname03], disc}},
{cluster_partition_handling, ignore},
{tcp_listen_options,
[binary,
{packet, raw},
{reuseaddr, true},
{backlog, 128},
{nodelay, true},
{exit_on_close, false}]
},
{default_user, <<"guest">>},
{default_pass, <<"guest">>},
{log_levels, [{autocluster, debug}, {connection, info}]}
]},
{kernel, [
]},
{rabbitmq_management, [
{listener, [
{port, 15672}
]}
]}
].
% EOF
I have updated the /etc/hosts file with the details of all 3 nodes on all 3 servers. I am not sure where I am going wrong.