Error using the Hyperledger Caliper framework - testing

I have two tests, one reading from and one writing to the blockchain. I'm getting two different errors, one at the start of the run and one during the writing test. The reading test works normally without problems.
Initial error:
2021.12.21-16:43:01.870 info [caliper] [round-orchestrator] Preparing worker connections
2021.12.21-16:43:01.870 info [caliper] [worker-orchestrator] Launching worker 1 of 2
2021.12.21-16:43:01.878 info [caliper] [worker-orchestrator] Launching worker 2 of 2
2021.12.21-16:43:01.884 info [caliper] [worker-orchestrator] Messenger not configured, entering configure phase...
2021.12.21-16:43:01.885 info [caliper] [worker-orchestrator] No existing workers detected, entering worker launch phase...
2021.12.21-16:43:01.885 info [caliper] [worker-orchestrator] Waiting for 2 workers to be connected...
2021.12.21-16:43:02.426 info [caliper] [cli-launch-worker] Set workspace path: /home/ubuntu/caliper/caliper-benchmarks/monitor
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set benchmark configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/config.yaml
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set network configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/network.yaml
2021.12.21-16:43:02.427 info [caliper] [cli-launch-worker] Set SUT type: fabric
2021.12.21-16:43:02.444 info [caliper] [cli-launch-worker] Set workspace path: /home/ubuntu/caliper/caliper-benchmarks/monitor
2021.12.21-16:43:02.446 info [caliper] [cli-launch-worker] Set benchmark configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/config.yaml
2021.12.21-16:43:02.446 info [caliper] [cli-launch-worker] Set network configuration path: /home/ubuntu/caliper/caliper-benchmarks/monitor/network.yaml
2021.12.21-16:43:02.447 info [caliper] [cli-launch-worker] Set SUT type: fabric
2021.12.21-16:43:02.505 info [caliper] [worker-orchestrator] 2 workers connected, progressing to worker assignment phase.
2021.12.21-16:43:02.505 info [caliper] [worker-orchestrator] Workers currently unassigned, awaiting index assignment...
2021.12.21-16:43:02.506 info [caliper] [worker-orchestrator] Waiting for 2 workers to be assigned...
2021.12.21-16:43:02.559 info [caliper] [worker-orchestrator] 2 workers assigned, progressing to worker initialization phase.
2021.12.21-16:43:02.560 info [caliper] [worker-orchestrator] Waiting for 2 workers to be ready...
2021.12.21-16:43:03.629 info [caliper] [worker-message-handler] Initializing Worker#1...
2021.12.21-16:43:03.629 info [caliper] [fabric-connector] Initializing gateway connector compatible with installed SDK: 2.2.3
2021.12.21-16:43:03.629 info [caliper] [IdentityManager] Adding User1 (admin=false) as User1 for organization Org1MSP
2021.12.21-16:43:03.629 info [caliper] [worker-message-handler] Worker#1 initialized
2021.12.21-16:43:03.683 info [caliper] [worker-orchestrator] 2 workers ready, progressing to test preparation phase.
2021.12.21-16:43:03.684 info [caliper] [round-orchestrator] Started round 1 (Set)
2021.12.21-16:43:03.690 info [caliper] [worker-message-handler] Preparing Worker#1 for Round#0
2021.12.21-16:43:03.696 info [caliper] [connectors/v2/FabricGateway] Connecting user with identity User1 to a Network Gateway
2021.12.21-16:43:04.005 info [caliper] [worker-message-handler] Initializing Worker#0...
2021.12.21-16:43:04.005 info [caliper] [fabric-connector] Initializing gateway connector compatible with installed SDK: 2.2.3
2021.12.21-16:43:04.005 info [caliper] [IdentityManager] Adding User1 (admin=false) as User1 for organization Org1MSP
2021.12.21-16:43:04.005 info [caliper] [worker-message-handler] Worker#0 initialized
2021.12.21-16:43:04.006 info [caliper] [worker-message-handler] Preparing Worker#0 for Round#0
2021.12.21-16:43:04.006 info [caliper] [connectors/v2/FabricGateway] Connecting user with identity User1 to a Network Gateway
2021.12.21-16:43:04.007 info [caliper] [connectors/v2/FabricGateway] Successfully connected user with identity User1 to a Network Gateway
2021.12.21-16:43:04.008 info [caliper] [connectors/v2/FabricGateway] Generating contract map for user User1
2021.12.21-16:43:04.018 info [caliper] [connectors/v2/FabricGateway] Successfully connected user with identity User1 to a Network Gateway
2021.12.21-16:43:04.019 info [caliper] [connectors/v2/FabricGateway] Generating contract map for user User1
2021-12-21T16:43:07.083Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.086Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server orderer.example.com:7050 url:grpc://localhost:7050 timeout:3000
2021-12-21T16:43:07.088Z - error: [DiscoveryService]: _buildOrderer[channelall] - Unable to connect to the discovered orderer orderer.example.com:7050 due to Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.085Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
2021-12-21T16:43:07.090Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server orderer.example.com:7050 url:grpc://localhost:7050 timeout:3000
2021-12-21T16:43:07.092Z - error: [DiscoveryService]: _buildOrderer[channelall] - Unable to connect to the discovered orderer orderer.example.com:7050 due to Error: Failed to connect before the deadline on Committer- name: orderer.example.com:7050, url:grpc://localhost:7050, connected:false, connectAttempted:true
The second error that occurs in the writing test is the following:
2021.12.21-16:43:07.112 info [caliper] [worker-orchestrator] 2 workers prepared, progressing to test phase.
2021.12.21-16:43:07.112 info [caliper] [round-orchestrator] Monitors successfully started
2021.12.21-16:43:07.115 info [caliper] [worker-message-handler] Worker#1 is starting Round#0
2021.12.21-16:43:07.116 info [caliper] [worker-message-handler] Worker#0 is starting Round#0
2021.12.21-16:43:07.123 info [caliper] [caliper-worker] Worker #1 starting workload loop
2021.12.21-16:43:07.126 info [caliper] [caliper-worker] Worker #0 starting workload loop
2021.12.21-16:43:07.941 error [caliper] [connectors/v2/FabricGateway] Failed to perform submit transaction [set] using arguments [node1,{'CPU':50,'MEM':50,'STG':50.0,'DAT':'2020-11-17T00:10:00Z'}], with error: Error: No endorsement plan available
at DiscoveryHandler.endorse (/home/ubuntu/caliper/node_modules/fabric-network/node_modules/fabric-common/lib/DiscoveryHandler.js:208:10)
at process._tickCallback (internal/process/next_tick.js:68:7)
Connection File
---
name: fabric
version: 2.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
    certificateAuthorities:
      - ca.org1.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    tlsCACerts:
      path: peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://192.169.0.7:7054
    caName: ca-org1
    tlsCACerts:
      path: peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    httpOptions:
      verify: false
Network File
name: Fabric
version: '2.0.0'
caliper:
  blockchain: fabric
  sutOptions:
    mutualTls: false
organizations:
  - mspid: Org1MSP
    identities:
      certificates:
        - name: 'User1'
          clientPrivateKey:
            path: 'peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/priv_sk'
          clientSignedCert:
            path: 'peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem'
    connectionProfile:
      path: 'connection_files/connection-org1.yaml'
      discover: true
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
channels:
  - channelName: channelall
    contracts:
      - id: monitor
Any tips that would help me solve these problems and continue development would be greatly appreciated.

From the network file you posted, a couple of points:
- You can't define any nodes in it (for example, you've added orderers); they are ignored.
- You've specified that your connection profile is a dynamic profile by setting discover to true in your network file. This means it will use discovery to determine the network topology and may not use the nodes you have explicitly defined in your connection profile. If you want to be explicit in your connection profile (and thus define a static connection profile), as in your example above, you should set discover to false, which hopefully will solve your problem (see the snippet below).
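For example, the relevant fragment of your network file would then look like this (only the discover value changes from the file above):

organizations:
  - mspid: Org1MSP
    connectionProfile:
      path: 'connection_files/connection-org1.yaml'
      discover: false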
As a side note, if you use discovery then the node-sdk (used by Caliper) by default converts all discovered node hosts to localhost, which is why you see it trying to contact localhost. To disable this, see the Runtime settings in https://hyperledger.github.io/caliper/v0.4.2/fabric-config/new/
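A minimal sketch of passing that as a command-line runtime setting, assuming I recall the setting name correctly (please double-check the exact flag on the linked page):

npx caliper launch manager \
  --caliper-workspace . \
  --caliper-benchconfig config.yaml \
  --caliper-networkconfig network.yaml \
  --caliper-fabric-gateway-localhost false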

The issue was in the connection file.
The old file was:
---
name: fabric
version: 2.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
      orderer: '10000'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
    certificateAuthorities:
      - ca.org1.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    tlsCACerts:
      path: crypto-config/peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: http://192.169.0.7:7054
    caName: ca-org1
    tlsCACerts:
      path: crypto-config/peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem
    httpOptions:
      verify: false
The new connection file that I created is this:
---
name: fabric
description: "Sample connection profile for documentation topic"
version: 2.0.0
channels:
  channelall:
    orderers:
      - orderer.example.com
    peers:
      peer0.org1.example.com:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
      peer0.org2.example.com:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
      peer0.org3.example.com:
        endorsingPeer: false
        chaincodeQuery: false
        ledgerQuery: true
        eventSource: true
organizations:
  Org1:
    mspid: Org1MSP
    peers:
      - peer0.org1.example.com
  Org2:
    mspid: Org2MSP
    peers:
      - peer0.org2.example.com
  Org3:
    mspid: Org3MSP
    peers:
      - peer0.org3.example.com
orderers:
  orderer.example.com:
    url: grpc://192.169.0.9:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://192.169.0.7:7051
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      request-timeout: 120001
  peer0.org2.example.com:
    url: grpc://192.169.0.10:7051
    grpcOptions:
      ssl-target-name-override: peer0.org2.example.com
      request-timeout: 120001
  peer0.org3.example.com:
    url: grpc://192.169.0.11:7051
    grpcOptions:
      ssl-target-name-override: peer0.org3.example.com
      request-timeout: 120001
It contains all the information about the peers and orderers.
Thanks everybody for the help.

Related

Serverless: TypeError: Cannot read property 'stage' of undefined

frameworkVersion: '2'
plugins:
  - serverless-step-functions
  - serverless-python-requirements
  - serverless-parameters
  - serverless-pseudo-parameters
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'}
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
package:
  exclude:
    - node_modules/**
    - venv/**
# Lambda functions
functions:
  generateAlert:
    handler: handler.generateAlert
  generateData:
    handler: handler.generateDataHandler
    timeout: 600
  approveDenied:
    handler: handler.approveDenied
    timeout: 600
stepFunctions:
  stateMachines:
    "claims-etl-and-insight-generation-${self:provider.stage}":
      loggingConfig:
        level: ALL
        includeExecutionData: true
        destinations:
          - Fn::GetAtt: ["ETLStepFunctionLogGroup", Arn]
      name: "claims-etl-and-insight-generation-${self:provider.stage}"
      definition:
        Comment: "${self:provider.stage} ETL Workflow"
        StartAt: RawQualityJob
        States:
          # Raw Data Quality Check Job Start
          RawQualityJob:
            Type: Task
            Resource: arn:aws:states:::glue:startJobRun.sync
            Parameters:
              JobName: "data_quality_v2_${self:provider.stage}"
              Arguments:
                "--workflow-name": "${self:provider.stage}-Workflow"
                "--dataset_id.$": "$.datasetId"
                "--client_id.$": "$.clientId"
            Next: DataQualityChoice
            Retry:
              - ErrorEquals: [States.ALL]
                MaxAttempts: 2
                IntervalSeconds: 10
                BackoffRate: 5
            Catch:
              - ErrorEquals: [States.ALL]
                Next: GenerateErrorAlertDataQuality
          # End Raw Data Quality Check Job
          DataQualityChoice:
            Type: Task
            Resource:
              Fn::GetAtt: [approveDenied, Arn]
            Next: Is Approved ?
          Is Approved ?:
            Type: Choice
            Choices:
              - Variable: "$.quality_status"
                StringEquals: "Denied"
                Next: FailState
            Default: HeaderLineJob
          FailState:
            Type: Fail
            Cause: "Denied status"
          # Header Line Job Start
          HeaderLineJob:
            Type: Parallel
            Branches:
              - StartAt: HeaderLineIngestion
                States:
                  HeaderLineIngestion:
                    Type: Task
                    Resource: arn:aws:states:::glue:startJobRun.sync
                    Parameters:
                      JobName: headers_lines_etl_rs_v2
                      Arguments:
                        "--workflow-name.$": "$.Arguments.--workflow-name"
                        "--dataset_id.$": "$.Arguments.--dataset_id"
                        "--client_id.$": "$.Arguments.--client_id"
                    End: True
                    Retry:
                      - ErrorEquals: [States.ALL]
                        MaxAttempts: 2
                        IntervalSeconds: 10
                        BackoffRate: 5
                    Catch:
                      - ErrorEquals: [States.ALL]
                        Next: GenerateErrorAlertHeaderLine
            End: True
          # Header Line Job End
          GenerateErrorAlertDataQuality:
            Type: Task
            Resource:
              Fn::GetAtt: [generateAlert, Arn]
            End: true
resources:
  Resources:
    # Cloudwatch Log
    "ETLStepFunctionLogGroup":
      Type: AWS::Logs::LogGroup
      Properties:
        LogGroupName: "ETLStepFunctionLogGroup_${self:provider.stage}"
This is what my serverless.yml file looks like.
When I run the command:
sls deploy --stage staging
It shows:
Type Error ----------------------------------------------
TypeError: Cannot read property 'stage' of undefined
at Variables.getValueFromOptions (/snapshot/serverless/lib/classes/Variables.js:648:37)
at Variables.getValueFromSource (/snapshot/serverless/lib/classes/Variables.js:579:17)
at /snapshot/serverless/lib/classes/Variables.js:539:12
Your Environment Information ---------------------------
Operating System: linux
Node Version: 14.4.0
Framework Version: 2.30.3 (standalone)
Plugin Version: 4.5.1
SDK Version: 4.2.0
Components Version: 3.7.4
How can I fix this? I tried with different versions of Serverless.
The error is in the yamlParser file, which is provided by serverless-step-functions.
Above is my serverless config file.
It looks like a $ sign is missing from your provider -> stage?
provider:
  name: aws
  region: us-east-2
  stage: ${opt:stage, 'dev'} # $ sign is missing?
  runtime: python3.7
  versionFunctions: false
  iam:
    role: arn:aws:iam::#{AWS::AccountId}:role/AWSLambdaVPCAccessExecutionRole
  apiGateway:
    shouldStartNameWithService: true
  lambdaHashingVersion: 20201221
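For reference, Serverless Framework variables are only resolved when they start with a dollar sign; the #{...} form is handled separately by the serverless-pseudo-parameters plugin. A minimal illustration (the custom key and table name are made-up placeholders):

provider:
  stage: ${opt:stage, 'dev'}                    # CLI option with a default value
custom:
  tableName: claims-${self:provider.stage}      # placeholder: references another config value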

K3s Vault Cluster -- http: server gave HTTP response to HTTPS client

I am trying to set up a 3-node Vault cluster with Raft storage enabled. I am currently at a loss as to why the readiness probe (and also the liveness probe) is returning:
Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
I am using Helm 3 and installing with 'helm install vault hashicorp/vault --namespace vault -f override-values.yaml':
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: false
server:
  image:
    repository: "hashicorp/vault"
    tag: "1.5.5"
  resources:
    requests:
      memory: 1Gi
      cpu: 2000m
    limits:
      memory: 2Gi
      cpu: 2000m
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  livenessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true"
    initialDelaySeconds: 60
  VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt
  # extraVolumes is a list of extra volumes to mount. These will be exposed
  # to Vault in the path `/vault/userconfig/<name>/`.
  extraVolumes:
    # holds the cert file and the key file
    - type: secret
      name: tls-server
    # holds the ca certificate
    - type: secret
      name: tls-ca
  auditStorage:
    enabled: true
  standalone:
    enabled: false
  # Run Vault in "HA" mode.
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/tls-server/tls.crt"
          tls_key_file = "/vault/userconfig/tls-server/tls.key"
          tls_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
        }
        storage "raft" {
          path = "/vault/data"
          retry_join {
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
          retry_join {
            leader_api_addr = "https://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/tls-ca/ca.crt"
            leader_client_cert_file = "/vault/userconfig/tls-server/tls.crt"
            leader_client_key_file = "/vault/userconfig/tls-server/tls.key"
          }
        }
        service_registration "kubernetes" {}
# Vault UI
ui:
  enabled: true
  serviceType: "ClusterIP"
  serviceNodePort: null
  externalPort: 8200
Output from describe pod vault-0:
Name: vault-0
Namespace: vault
Priority: 0
Node: node4/10.211.55.7
Start Time: Wed, 11 Nov 2020 15:06:47 +0700
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-5c4b47bdc4
helm.sh/chart=vault-0.8.0
statefulset.kubernetes.io/pod-name=vault-0
vault-active=false
vault-initialized=false
vault-perf-standby=false
vault-sealed=true
vault-version=1.5.5
Annotations: <none>
Status: Running
IP: 10.42.4.82
IPs:
IP: 10.42.4.82
Controlled By: StatefulSet/vault
Containers:
vault:
Container ID: containerd://6dfde76051f44c22003cc02a880593792d304e74c56d717eef982e0e799672f2
Image: hashicorp/vault:1.5.5
Image ID: docker.io/hashicorp/vault#sha256:90cfeead29ef89fdf04383df9991754f4a54c43b2fb49ba9ff3feb713e5ef1be
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
[ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
[ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
[ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
[ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Wed, 11 Nov 2020 15:25:21 +0700
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 11 Nov 2020 15:19:10 +0700
Finished: Wed, 11 Nov 2020 15:20:20 +0700
Ready: False
Restart Count: 8
Limits:
cpu: 2
memory: 2Gi
Requests:
cpu: 2
memory: 1Gi
Liveness: http-get https://:8200/v1/sys/health%3Fstandbyok=true delay=60s timeout=3s period=5s #success=1 #failure=2
Readiness: http-get https://:8200/v1/sys/health%3Fstandbyok=true&sealedcode=204&uninitcode=204 delay=5s timeout=3s period=5s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: vault (v1:metadata.namespace)
VAULT_ADDR: https://127.0.0.1:8200
VAULT_API_ADDR: https://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
VAULT_RAFT_NODE_ID: vault-0 (v1:metadata.name)
HOME: /home/vault
VAULT_CACERT: /vault/userconfig/tls-ca/ca.crt
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-lfgnj (ro)
/vault/audit from audit (rw)
/vault/config from config (rw)
/vault/data from data (rw)
/vault/userconfig/tls-ca from userconfig-tls-ca (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-vault-0
ReadOnly: false
audit:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: audit-vault-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
userconfig-tls-ca:
Type: Secret (a volume populated by a Secret)
SecretName: tls-ca
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-lfgnj:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-lfgnj
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned vault/vault-0 to node4
Warning Unhealthy 17m (x2 over 17m) kubelet Liveness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true": http: server gave HTTP response to HTTPS client
Normal Killing 17m kubelet Container vault failed liveness probe, will be restarted
Normal Pulled 17m (x2 over 18m) kubelet Container image "hashicorp/vault:1.5.5" already present on machine
Normal Created 17m (x2 over 18m) kubelet Created container vault
Normal Started 17m (x2 over 18m) kubelet Started container vault
Warning Unhealthy 13m (x56 over 18m) kubelet Readiness probe failed: Get "https://10.42.4.82:8200/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204": http: server gave HTTP response to HTTPS client
Warning BackOff 3m41s (x31 over 11m) kubelet Back-off restarting failed container
Logs from vault-0
2020-11-12T05:50:43.554426582Z ==> Vault server configuration:
2020-11-12T05:50:43.554524646Z
2020-11-12T05:50:43.554574639Z Api Address: https://10.42.4.85:8200
2020-11-12T05:50:43.554586234Z Cgo: disabled
2020-11-12T05:50:43.554596948Z Cluster Address: https://vault-0.vault-internal:8201
2020-11-12T05:50:43.554608637Z Go Version: go1.14.7
2020-11-12T05:50:43.554678454Z Listener 1: tcp (addr: "[::]:8200", cluster address: "[::]:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
2020-11-12T05:50:43.554693734Z Log Level: info
2020-11-12T05:50:43.554703897Z Mlock: supported: true, enabled: false
2020-11-12T05:50:43.554713272Z Recovery Mode: false
2020-11-12T05:50:43.554722579Z Storage: raft (HA available)
2020-11-12T05:50:43.554732788Z Version: Vault v1.5.5
2020-11-12T05:50:43.554769315Z Version Sha: f5d1ddb3750e7c28e25036e1ef26a4c02379fc01
2020-11-12T05:50:43.554780425Z
2020-11-12T05:50:43.672225223Z ==> Vault server started! Log data will stream in below:
2020-11-12T05:50:43.672519986Z
2020-11-12T05:50:43.673078706Z 2020-11-12T05:50:43.543Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-11-12T05:51:57.838970945Z ==> Vault shutdown triggered
I am running a 6-node Rancher k3s cluster v1.19.3ks2 on my Mac.
Any help would be appreciated

Hyperledger Fabric SDK not starting TLS handshake

I'm trying to get a small Golang application to connect to a Hyperledger Fabric network. The network is based on one of the official hyperledger-fabric samples, called 'first-network'. It is started by their 'byfn.sh' script and runs a functioning end-to-end test. The test executes commands directly using the 'cli' container, which has all the valid crypto material.
I, however, am trying to run a query or create a transaction using the fabric-sdk-go. I created a connection profile based on the official documentation and samples I found online.
sdk, err := fabsdk.New(config.FromFile("../integrity-network/connection-profile.yaml"))
...
clientChannelContext := sdk.ChannelContext("integrity-channel", fabsdk.WithUser("Admin@org1.example.com"), fabsdk.WithOrg("Org1"))
Reading the profile and creating the SDK instance works; however, creating the channel context fails, and peer0 of org1 tells me: first record does not look like a TLS handshake
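For context, the rest of the client code follows the usual fabric-sdk-go channel-client pattern; this is only a minimal sketch (the chaincode name, function and arguments are made-up placeholders), not the actual application code:

package main

import (
	"fmt"
	"log"

	"github.com/hyperledger/fabric-sdk-go/pkg/client/channel"
	"github.com/hyperledger/fabric-sdk-go/pkg/core/config"
	"github.com/hyperledger/fabric-sdk-go/pkg/fabsdk"
)

func main() {
	// Load the connection profile and create the SDK instance (this part works).
	sdk, err := fabsdk.New(config.FromFile("../integrity-network/connection-profile.yaml"))
	if err != nil {
		log.Fatalf("failed to create SDK: %v", err)
	}
	defer sdk.Close()

	// Per the description above, the failure surfaces around the channel context / client creation.
	ctx := sdk.ChannelContext("integrity-channel",
		fabsdk.WithUser("Admin@org1.example.com"),
		fabsdk.WithOrg("Org1"))

	client, err := channel.New(ctx)
	if err != nil {
		log.Fatalf("failed to create channel client: %v", err)
	}

	// Placeholder query: chaincode ID, function and args are illustrative only.
	resp, err := client.Query(channel.Request{
		ChaincodeID: "mycc",
		Fcn:         "query",
		Args:        [][]byte{[]byte("a")},
	})
	if err != nil {
		log.Fatalf("query failed: %v", err)
	}
	fmt.Printf("payload: %s\n", resp.Payload)
}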
I'm a bit confused about the crypto material I have to provide in the connection profile, but based on examples online I think it should be correct:
x-type: "hlfv1"
description: "Connection profile for our integrity network."
version: "1.0"
client:
  organization: org1
  logging:
    level: debug
  cryptoconfig:
    path: ../integrity-network/crypto-config/
  credentialStore:
    path: "/tmp/state-store"
    cryptoStore:
      path: /tmp/msp
  tlsCerts:
    systemCertPool: false
    client:
      key:
        path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/tls/client.key
      cert:
        path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/tls/client.crt
channels:
  integrity-channel:
    orderers:
      - orderer.example.com
    peers:
      peer0.org1.example.com:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
      peer1.org1.example.com:
        endorsingPeer: true
        chaincodeQuery: true
        ledgerQuery: true
        eventSource: true
organizations:
  OrdererOrg:
    mspid: OrdererOrg
    cryptoPath: crypto-config/ordererOrganizations/example.com/users/Admin@example.com/msp
    adminPrivateKey:
      path: ../integrity-network/crypto-config/ordererOrganizations/example.com/users/Admin@example.com/msp/keystore/f6dc3f715ffd9547e5ff5e3e08d5ac17f1e2b09968d2daba9e7a9a4e374a2fb1_sk
    signedCert:
      path: ../integrity-network/crypto-config/ordererOrganizations/example.com/users/Admin@example.com/msp/signcerts/Admin@example.com-cert.pem
  Org1:
    mspid: Org1MSP
    cryptoPath: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    peers:
      - peer0.org1.example.com
      - peer1.org1.example.com
    adminPrivateKey:
      path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/25117a9fcadf7b40ed7dcd29b7a478ca86728e564a8388aa889a5de71dec5df8_sk
    signedCert:
      path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/Admin@org1.example.com-cert.pem
    users:
      Admin@org1.example.com:
        key:
          path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/25117a9fcadf7b40ed7dcd29b7a478ca86728e564a8388aa889a5de71dec5df8_sk
        cert:
          path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/Admin@org1.example.com-cert.pem
      User1@org1.example.com:
        key:
          path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/e318dc3e94283337e3089673c8aca07ce0d6cc8ffdb03984ab2de11ec7ac11dd_sk
        cert:
          path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/signcerts/User1@org1.example.com-cert.pem
  Org2:
    mspid: Org2MSP
    cryptoPath: crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    peers:
      - peer0.org2.example.com
      - peer1.org2.example.com
    adminPrivateKey:
      path: ../integrity-network/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore/078fca0bf56b77656f745e62100a1fd7d55f5d2c2925b6180daac49b67e64f0d_sk
    signedCert:
      path: ../integrity-network/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/signcerts/Admin@org2.example.com-cert.pem
    users:
      Admin@org2.example.com:
        key:
          path: ../integrity-network/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore/078fca0bf56b77656f745e62100a1fd7d55f5d2c2925b6180daac49b67e64f0d_sk
        cert:
          path: ../integrity-network/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/signcerts/Admin@org2.example.com-cert.pem
      User1@org2.example.com:
        key:
          path: ../integrity-network/crypto-config/peerOrganizations/org2.example.com/users/User1@org2.example.com/msp/keystore/3fee22d1537bc40b5e3d036919e3651976a92e42df5725983400a4012f5bc138_sk
        cert:
          path: ../integrity-network/crypto-config/peerOrganizations/org2.example.com/users/User1@org2.example.com/msp/signcerts/User1@org2.example.com-cert.pem
orderers:
  orderer.example.com:
    url: grpc://localhost:7050
    grpcOptions:
      ssl-target-name-override: orderer.example.com
peers:
  peer0.org1.example.com:
    url: grpc://localhost:7051
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      request-timeout: 120001
    tlsCACerts:
      path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp/tlscacerts/tlsca.org1.example.com-cert.pem
  peer1.org1.example.com:
    url: grpc://localhost:8051
    grpcOptions:
      ssl-target-name-override: peer1.org1.example.com
      request-timeout: 120001
    tlsCACerts:
      path: ../integrity-network/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp/tlscacerts/tlsca.org1.example.com-cert.pem
  peer0.org2.example.com:
    url: grpc://localhost:9051
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      request-timeout: 120001
    tlsCACerts:
      path: ../integrity-network/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp/tlscacerts/tlsca.org2.example.com-cert.pem
  peer1.org2.net.ink.tum.de:
    url: grpc://localhost:10051
    grpcOptions:
      ssl-target-name-override: peer1.org2.example.com
      request-timeout: 120001
    tlsCACerts:
      path: ../integrity-network/crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp/tlscacerts/tlsca.org2.example.com-cert.pem
Note: for some reason I needed the users section; otherwise I would get a 'user not found' error. Most examples I found online did not include that section.
You need to use grpcs in your peer URLs:
peers:
  peer0.org1.example.com:
    url: grpcs://localhost:7051
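Since the network runs with TLS enabled, the orderer URL presumably needs the same change; a sketch:

orderers:
  orderer.example.com:
    url: grpcs://localhost:7050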

filebeat configuration using elasticsearch

I am facing an issue with Filebeat. I pulled the Filebeat image with docker pull docker.elastic.co/beats/filebeat:6.3.1.
My filebeat.yml file is:
filebeat.config:
  prospectors:
    path: ${path.config}/prospectors.d/*.yml
    reload.enabled: false
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false
processors:
  - add_cloud_metadata:
output.elasticsearch:
  hosts: ['192.0.0.0:9200']
  username: elastic
  password: changeme
setup.kibana:
  host: '192.0.0.0:5601'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log
When I run Filebeat, I am only getting yum.logs and 'harvester started for yum.log'. Please help me.
Thanks in advance.

ovirt: create iso/nfs storage domain error

I ran into a problem while creating a "New Domain" ISO/NFS storage domain. It reports "Error while executing action
New NFS Storage Domain: Storage domain remote path not mounted"
and the error code is 477.
I followed http://wiki.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues and found that the vdsm user currently can't use mount:
"mount: only root can do that"
The versions I use:
oVirt Engine Version: 3.1.0-2.fc17
oVirt Node Hypervisor 2.5.4-0.1.fc17
The error log:
2012-11-08 09:15:00,004 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-77) Checking autorecoverable storage domains done
2012-11-08 09:17:28,920 WARN [org.ovirt.engine.core.bll.GetConfigurationValueQuery] (ajp--0.0.0.0-8009-2) calling GetConfigurationValueQuery (StorageDomainNameSizeLimit) with null version,
using default general for version
2012-11-08 09:17:29,333 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] START, ValidateStorageServerConnectionVDSCommand(vdsId = 12bcf124-29a4-11e2-bcba-00505680002a, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 6556c55d-42a4-4dcc-832c-4d8987ebe6bd, connection: 200.200.101.219:/usr/lwq/iso };]), log id: 52777a80
2012-11-08 09:17:29,388 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] FINISH, ValidateStorageServerConnectionVDSCommand, return: {6556c55d-42a4-4dcc-832c-4d8987ebe6bd=0}, log id: 52777a80
2012-11-08 09:17:29,392 INFO [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (ajp--0.0.0.0-8009-3) [7720b88f] Running command: AddStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-11-08 09:17:29,404 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] START, ConnectStorageServerVDSCommand(vdsId = 12bcf124-29a4-11e2-bcba-00505680002a, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 6556c55d-42a4-4dcc-832c-4d8987ebe6bd, connection: 200.200.101.219:/usr/lwq/iso };]), log id: 36cb94f
2012-11-08 09:17:29,656 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-3) [7720b88f] FINISH, ConnectStorageServerVDSCommand, return: {6556c55d-42a4-4dcc-832c-4d8987ebe6bd=477}, log id: 36cb94f
2012-11-08 09:17:29,658 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (ajp--0.0.0.0-8009-3) [7720b88f] The connection with details 200.200.101.219:/usr/lwq/iso failed because of
error code 477 and error message is: 477
2012-11-08 09:17:29,717 INFO [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] (ajp--0.0.0.0-8009-11) [1661aa36] Running command: AddNFSStorageDomainCommand internal: false. En
tities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-11-08 09:17:29,740 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--0.0.0.0-8009-11) [1661aa36] START, CreateStorageDomainVDSCommand(vdsId = 12bcf12
4-29a4-11e2-bcba-00505680002a, storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static#4a900545, args=200.200.101.219:/usr/lwq/iso), log id: 50b803a0
2012-11-08 09:17:35,233 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Failed in CreateStorageDomainVDS method
2012-11-08 09:17:35,234 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Error code StorageDomainFSNotMounted and error message VDSGeneri
cException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Storage domain remote path not mounted: ('/rhev/data-center/mnt/200.200.101.219:_usr_lwq_iso',)
2012-11-08 09:17:35,260 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp--0.0.0.0-8009-11) [1661aa36] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageD
omainVDSCommand return value
As far as I know, VDSM uses sudo to run privileged commands, but that mount error is still strange. Can you share the vdsm log?
Also, you may get more attention on the engine-users or vdsm mailing lists.