Cannot install chaincode via CLI - channel

I created and joined the channel, but I could not install the test chaincode via the CLI. It was successful when I was using Alpha2.
I get the following error message in the CLI:
root@18599095828d:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
2017-06-13 08:09:17.401 UTC [msp] getMspConfig -> INFO 001 intermediate certs folder not found at [/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/intermediatecerts]. Skipping.: [stat /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/intermediatecerts: no such file or directory]
2017-06-13 08:09:17.401 UTC [msp] getMspConfig -> INFO 002 crls folder not found at [/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/crls]. Skipping.: [stat /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/crls: no such file or directory]
2017-06-13 08:09:17.401 UTC [msp] getMspConfig -> INFO 003 MSP configuration file not found at [/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/config.yaml]: [stat /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/config.yaml: no such file or directory]
2017-06-13 08:09:17.424 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-06-13 08:09:17.424 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-06-13 08:09:17.425 UTC [golang-platform] getCodeFromFS -> DEBU 006 getCodeFromFS github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
Error: Error getting chaincode code chaincode: Error getting chaincode package bytes: Error obtaining imports: go list: failed with error: "exec: not started"
Usage:
peer chaincode install [flags]
Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
-C, --chainID string The chain on which this command should be executed (default "testchainid")
-c, --ctor string Constructor message for the chaincode in JSON format (default "{}")
-E, --escc string The name of the endorsement system chaincode to be used for this chaincode
-l, --lang string Language the chaincode is written in (default "golang")
--logging-level string Default logging level and overrides, see core.yaml for full syntax
-n, --name string Name of the chaincode
-o, --orderer string Ordering service endpoint
-p, --path string Path to chaincode
-P, --policy string The endorsement policy associated to this chaincode
--test.coverprofile string Done (default "coverage.cov")
-t, --tid string Name of a custom ID generation algorithm (hashing and decoding) e.g. sha256base64
--tls Use TLS when communicating with the orderer endpoint
-u, --username string Username for chaincode operations when security is enabled
-v, --version string Version of the chaincode specified in install/instantiate/upgrade commands
-V, --vscc string The name of the verification system chaincode to be used for this chaincode
root@18599095828d:/opt/gopath/src/github.com/hyperledger/fabric/peer#

The issue is that you can no longer use the peer container as the CLI. You should use the fabric-tools container instead.
Please note the change in the e2e compose file here: https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/docker-compose-cli.yaml#L42
Please try the fabric-tools image and things should work.
FYI - the error is due to the fact that the Go compiler is not included in the fabric-peer image, and it is now required when using the CLI to package chaincode.
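For reference, a minimal sketch of running the install from a fabric-tools based container (the container name cli is an assumption based on the linked e2e_cli compose file):

# from the host, enter the fabric-tools based cli container, then install
docker exec -it cli bash
peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02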

Related

Hyperledger Fabric error: "TLS: bad certificate server" when installing chaincode

I'm just starting to learn HLF, and I get an error while following the tutorial from the docs: link
I downloaded fabric-samples using this command (replaced bit.ly link with the destination):
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 2.2.2 1.4.9
I run logspout in one terminal and try to execute peer lifecycle chaincode install basic.tar.gz in another one, and this is the result I get:
Error: failed to retrieve endorser client for install: endorser client failed to connect to localhost:7051: failed to create new connection: context deadline exceeded
Log presented by Logspout:
peer0.org1.example.com|2022-03-15 13:03:24.452 UTC [core.comm] ServerHandshake -> ERRO 04a Server TLS handshake failed in 2.650245ms with error remote error: tls: bad certificate server=PeerServer remoteaddress=172.22.0.1:61126
I set the envs in terminal as instructed in the docs, and I checked that CORE_PEER_TLS_ROOTCERT_FILE variable points to an existing file. The content of the file is the same as on the container.
What I tried to do:
download fabric-samples again and redo all the setup, copy-pasting the commands directly from the docs
Do you have any suggestions where I can look for the issue?
I resolved the problem: I was using peer version 2.2.1 from previous experiments; it probably collided with FABRIC_CFG_PATH.
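For anyone hitting the same mismatch, a quick sketch of how to check which peer binary and config are actually in use (paths assume the standard fabric-samples test-network layout):

which peer            # make sure no stale binary from earlier experiments wins
peer version          # should report the freshly downloaded 2.2.2, not 2.2.1
export PATH=${PWD}/../bin:$PATH          # prefer the freshly downloaded binaries
export FABRIC_CFG_PATH=${PWD}/../config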

Attempting to install to-be-continuous with OpenShift 4 and self-managed GitLab

Following the instructions from here, I'm attempting to get to-be-continuous up and running.
I've created the empty to-be-continuous root group and the Maintainer non-individual GitLab account, and generated its appropriately scoped personal access token.
Upon executing the curl command to recursively copy the tbc group, I notice that the tools sub-group isn't cloned.
Seeing that the tracking repo from the tools group is required for the next step, I manually created the tools sub-group and individually manually cloned each of the repos under it, effectively mirroring the structure and content of the authoritative tbc repo.
Additionally I've configured my self-hosted GitLab's CA in the OpenShift GitLab runner so that I no longer get x509 errors.
With the above in place, including an available GitLab runner on my OpenShift cluster, I attempted to manually run the tracking repo's pipeline (as I understand this to be a prerequisite to any other pipeline runs?).
The GitLab runner seemed to pick up the pipeline, as the runner's log showed the following:
Checking for jobs... received job=6103 repo_url=https://git.corp.odfl.com/to-be-continuous/tools/tracking.git runner=b3CyGtqD
Checking for jobs... received job=6104 repo_url=https://git.corp.odfl.com/to-be-continuous/tools/tracking.git runner=b3CyGtqD
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
Checking for jobs... received job=6105 repo_url=https://git.corp.odfl.com/to-be-continuous/tools/tracking.git runner=b3CyGtqD
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
WARNING: Job failed: command terminated with exit code 1 duration_s=9.30956493 job=6103 project=876 runner=b3CyGtqD
WARNING: Failed to process runner builds=2 error=command terminated with exit code 1 executor=kubernetes runner=b3CyGtqD
WARNING: Job failed: command terminated with exit code 1 duration_s=9.808499871 job=6105 project=876 runner=b3CyGtqD
WARNING: Failed to process runner builds=1 error=command terminated with exit code 1 executor=kubernetes runner=b3CyGtqD
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
ERROR: Could not create cache adapter error=cache factory not found: factory for cache adapter "" was not registered
Job succeeded duration_s=30.342517342 job=6104 project=876 runner=b3CyGtqD
At the same time, the pipeline log on GitLab shows the following:
Running with gitlab-runner 14.1.0 (8925d9a0)
on gitlab-runner-runner-5bc5455cfb-pmrpl b3CyGtqD
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: dle-test
Using Kubernetes executor with image hadolint/hadolint:latest-alpine ...
Using attach strategy to execute scripts...
Preparing environment
00:07
Waiting for pod dle-test/runner-b3cygtqd-project-876-concurrent-0fvm2z to be running, status is Pending
Waiting for pod dle-test/runner-b3cygtqd-project-876-concurrent-0fvm2z to be running, status is Pending
ContainersNotInitialized: "containers with incomplete status: [init-logs]"
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
Running on runner-b3cygtqd-project-876-concurrent-0fvm2z via gitlab-runner-runner-5bc5455cfb-pmrpl...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/b3CyGtqD/0/to-be-continuous/tools/tracking/.git/
Created fresh repository.
Checking out e31d6d28 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:01
$ # BEGSCRIPT # collapsed multi-line command
/scripts-876-6103/step_script: eval: line 162: can't create /etc/ssl/certs/ca-certificates.crt: Permission denied
Uploading artifacts for failed job
00:00
Uploading artifacts...
WARNING: reports/hadolint-*.json: no matching files
ERROR: No files to upload
Uploading artifacts...
WARNING: reports/hadolint-*.json: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Having spent quite a few hours getting this far, I'm stumped. Any idea what I'm doing wrong?
Added kaniko log as requested:
Running with gitlab-runner 14.1.0 (8925d9a0)
on gitlab-runner-runner-5bc5455cfb-4ggsp n8KiyZgX
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: dle-test
Using Kubernetes executor with image gcr.io/kaniko-project/executor:debug ...
Using attach strategy to execute scripts...
Preparing environment
00:13
Waiting for pod dle-test/runner-n8kiyzgx-project-876-concurrent-0knvl9 to be running, status is Pending
Waiting for pod dle-test/runner-n8kiyzgx-project-876-concurrent-0knvl9 to be running, status is Pending
ContainersNotInitialized: "containers with incomplete status: [init-logs]"
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
Waiting for pod dle-test/runner-n8kiyzgx-project-876-concurrent-0knvl9 to be running, status is Pending
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
Waiting for pod dle-test/runner-n8kiyzgx-project-876-concurrent-0knvl9 to be running, status is Pending
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
Running on runner-n8kiyzgx-project-876-concurrent-0knvl9 via gitlab-runner-runner-5bc5455cfb-4ggsp...
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/n8KiyZgX/0/to-be-continuous/tools/tracking/.git/
Created fresh repository.
Checking out e31d6d28 as master...
Skipping Git submodules setup
Restoring cache
00:00
Checking cache for master-docker-2...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Downloading artifacts
00:01
Downloading artifacts for docker-hadolint (6121)...
Downloading artifacts from coordinator... ok id=6121 responseStatus=200 OK token=LRUFpXw7
WARNING: reports/hadolint-dde65eefd6c9a71b70c22f15c806082e.json: lchown reports/hadolint-dde65eefd6c9a71b70c22f15c806082e.json: operation not permitted (suppressing repeats)
Downloading artifacts for go-build-test (6122)...
Downloading artifacts from coordinator... ok id=6122 responseStatus=200 OK token=nqXz2-2P
WARNING: bin/: lchown bin/: operation not permitted (suppressing repeats)
Executing "step_script" stage of the job script
00:08
$ # BEGSCRIPT # collapsed multi-line command
[WARN] =======================================================================================================
[WARN] The template docker:1.2.0 you're using is not up-to-date: consider upgrading to version 2.1.1
[WARN] (set $TEMPLATE_CHECK_UPDATE_DISABLED to disable this message)
[WARN] =======================================================================================================
[INFO] Custom CA certificates configured in /kaniko/ssl/certs/ca-certificates.crt
[INFO] Docker authentication configured for
$ run_build_kaniko "$DOCKER_SNAPSHOT_IMAGE" --build-arg http_proxy="$http_proxy" --build-arg https_proxy="$https_proxy" --build-arg no_proxy="$no_proxy"
[INFO] Build & deploy image /snapshot:master
[INFO] Kaniko command: /kaniko/executor --context . --dockerfile ./Dockerfile --destination /snapshot:master --cache --cache-dir=/builds/n8KiyZgX/0/to-be-continuous/tools/tracking/.cache --verbosity info --build-arg CI_PROJECT_URL --build-arg TRACKING_CONFIGURATION --build-arg http_proxy= --build-arg https_proxy= --build-arg no_proxy=
E1013 18:05:11.931688 44 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "/snapshot:master": GET https://index.docker.io/v2/snapshot/blobs/uploads/: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:snapshot Type:repository]]
Uploading artifacts for failed job
00:01
Uploading artifacts...
WARNING: docker.env: no matching files
ERROR: No files to upload
Cleaning up file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
First of all, thanks for your feedback. I thoroughly investigated and you're right: we recently introduced a bug in our gitlab-sync.sh script that prevented it from recursing :(
A fix is on its way; you should be able to retry once it's merged.
About your second issue, the logs clearly suggest the hadolint job failed while importing your custom CA certificates, but that should not happen with the hadolint/hadolint:latest-alpine image.
See the same job logs on gitlab.com:
[INFO] Custom CA certificates imported in /etc/ssl/certs/ca-certificates.crt
It's not clear to me where the problem could come from.
A few questions to help me investigate:
which kind of GitLab runners did you configure?
which technique did you use to configure your custom CA certificates? Did you configure a global DEFAULT_CA_CERTS as suggested in our doc (see the sketch after these questions)?
is docker-hadolint the only job to fail? You should also have go-build-test and go-ci-lint in the same stage that import the custom CA certificates in the same way...
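One way to narrow down the CA question (a hedged sketch, assuming the job really uses hadolint/hadolint:latest-alpine): check whether the image's default user can write the CA bundle that the template updates:

docker run --rm --entrypoint sh hadolint/hadolint:latest-alpine -c 'id; ls -l /etc/ssl/certs/ca-certificates.crt'
# Locally the image may run as root; on OpenShift the restricted SCC can assign
# an arbitrary non-root UID, in which case appending to the bundle fails with
# "Permission denied" as in the job log above.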

Fabric8 / Minikube: Builds in Jenkins are failing due to authorization problems

I wanted to learn more about Fabric8; however, it is not possible to build even a very simple project. I am running it locally on a Minikube cluster.
The setup is:
Mac OS Sierra
Minikube v0.18.0
Fabric8 v0.4.122
So I have a simple Spring Boot application in the local Gogs repository. The builds are failing with this message:
/usr/bin/git checkout -f d8af29f8af7a498331a244d245fb321003ef110d
/usr/bin/git rev-list d8af29f8af7a498331a244d245fb321003ef110d # timeout=10
[Pipeline] End of Pipeline
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.utils.HttpClientUtils.createHttpClient(HttpClientUtils.java:153)
[...]
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
So I took the ca.crt from Minikube (~/.minikube/ca.crt) and added it (base64-encoded) to the jenkins-git-ssh secret, which gets mounted in the Jenkins pod in /var/run/secrets/kubernetes.io/serviceaccount. The next build ended with this error:
/usr/bin/git checkout -f d8af29f8af7a498331a244d245fb321003ef110d
/usr/bin/git rev-list d8af29f8af7a498331a244d245fb321003ef110d # timeout=10
[Pipeline] End of Pipeline
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default/. Message: Unauthorized.
The same happens when I use apiserver.crt from Minikube.
When using ca.pem instead I get:
Caused by: java.security.cert.CertificateException: Unable to initialize, java.io.IOException: extra data given to DerValue constructor
at sun.security.x509.X509CertImpl.<init>(X509CertImpl.java:198)
at sun.security.provider.X509Factory.engineGenerateCertificate(X509Factory.java:102)
I can access the Kubernetes API from the Jenkins pod only when adding both apiserver.crt and apiserver.key to the secret. Executing
curl -k --cert apiserver.crt --key apiserver.key https://kubernetes.default/.
is successful then - but the Jenkins build is still failing.
So I'm a bit lost here. Does anybody have an idea how to continue?
Thanks and regards,
Daniel
We have a fix but it's not released yet. Details can be found at https://github.com/fabric8io/fabric8/issues/6829#issuecomment-301467664, which also describes a workaround.
TL;DR you can edit the jenkins service account and remove the following lines before restarting the jenkins master pod:
-secrets:
-- name: "jenkins-git-ssh"
-- name: "jenkins-master-ssh"
-- name: "jenkins-release-gpg"
Hope that helps.

Bazel build fails with "Executing genrule @six_archive//:copy_six failed" error while building syntaxnet

I'm trying to follow the instructions at syntaxnet's github page to build syntaxnet parser models.
My system is Debian Wheezy. It shouldn't be very different from Ubuntu 14.04 LTS or 15.04. I have compiled bazel 0.2.2 (as opposed to 0.2.2b) from source, and it appears to work correctly.
Whenever I launch the bazel test syntaxnet/... util/utf8/... command, no tests are executed (all skipped) with some quite cryptic error messages. Here's an example:
root@host:~/tensorflow_syntaxnet/models/syntaxnet# ../../bazel/output/bazel test syntaxnet/... util/utf8/...
Extracting Bazel installation...
.............
INFO: Found 65 targets and 12 test targets...
ERROR: /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/external/six_archive/BUILD:1:1: Executing genrule @six_archive//:copy_six failed: namespace-sandbox failed: error executing command /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox ... (remaining 5 argument(s) skipped).
unshare failed with EINVAL even after 101 tries, giving up.
INFO: Elapsed time: 95.469s, Critical Path: 22.46s
//syntaxnet:arc_standard_transitions_test NO STATUS
//syntaxnet:beam_reader_ops_test NO STATUS
//syntaxnet:graph_builder_test NO STATUS
//syntaxnet:lexicon_builder_test NO STATUS
//syntaxnet:parser_features_test NO STATUS
//syntaxnet:parser_trainer_test NO STATUS
//syntaxnet:reader_ops_test NO STATUS
//syntaxnet:sentence_features_test NO STATUS
//syntaxnet:shared_store_test NO STATUS
//syntaxnet:tagger_transitions_test NO STATUS
//syntaxnet:text_formats_test NO STATUS
//util/utf8:unicodetext_unittest NO STATUS
Executed 0 out of 12 tests: 12 were skipped.
I'm using Oracle Java 8 JDK as recommended, and my compiler is:
~/tensorflow_syntaxnet/models/syntaxnet# gcc --version
gcc (Debian 4.7.2-5) 4.7.2
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Tried looking into the namespace-sandbox binary that's mentioned in the error message, but before I dive deep into this, I thought I'd ask here.
~/tensorflow_syntaxnet/models/syntaxnet# ls -l /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox
lrwxrwxrwx 1 root root 108 May 13 14:52 /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox -> /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
~/tensorflow_syntaxnet/models/syntaxnet# readlink /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox
/root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
Command seems to work fine though:
~/tensorflow_syntaxnet/models/syntaxnet# file $(readlink /root/.cache/bazel/_bazel_root/74c6bab7a21f28ad02405b720243d086/syntaxnet/_bin/namespace-sandbox)
/root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.26, BuildID[md5/uuid]=0xecfd97b6a6b9a193b045be13654bd55b, not stripped
~/tensorflow_syntaxnet/models/syntaxnet# /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
No command specified.
Usage: /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox [-S sandbox-root] -- command arg1
provided: /root/.cache/bazel/_bazel_root/install/ca381eaad1c931167a6355cb8a2b98cf/_embedded_binaries/namespace-sandbox
Mandatory arguments:
-S <sandbox-root> directory which will become the root of the sandbox
-- command to run inside sandbox, followed by arguments
Optional arguments:
-W <working-dir> working directory
-T <timeout> timeout after which the child process will be terminated with SIGTERM
-t <timeout> in case timeout occurs, how long to wait before killing the child with SIGKILL
-d <dir> create an empty directory in the sandbox
-M/-m <source/target> system directory to mount inside the sandbox
Multiple directories can be specified and each of them will be mounted readonly.
The -M option specifies which directory to mount, the -m option specifies where to
mount it in the sandbox.
-n if set, a new network namespace will be created
-r if set, make the uid/gid be root, otherwise use nobody
-D if set, debug info will be printed
-l <file> redirect stdout to a file
-L <file> redirect stderr to a file
@FILE read newline-separated arguments from FILE
Any idea?
UPDATE: I have done exactly the same steps on Ubuntu 14.04 LTS (my small workstation, as opposed to the production server running Debian) and everything works well there, with all tests passing. I wonder what the difference is.
Apparently some permission errors happen when setting up the sandbox. A quick workaround is to deactivate the sandbox by using --genrule_strategy=standalone --spawn_strategy=standalone (note that the second one is already specified in the TensorFlow rc file).
You can set those flags in your ~/.bazelrc:
echo "build --genrule_strategy=standalone --spawn_strategy=standalone" >>~/.bazelrc

Cloud VM instance broken packages after downgrading packages to an earlier version

I did an apt-get upgrade because the load times of our production server were about 40 seconds. I don't have a snapshot from before or after the upgrade (although there is a six-month-old snapshot). Load times improved to 15-ish seconds, but our erizo service stopped working. Erizo was also running on that instance. Restarting the services didn't help, so I tried downgrading the packages to their previous versions (https://askubuntu.com/questions/138284/how-to-downgrade-a-package-via-apt-get), just like it was, but on almost every package there was an error: the previous package version did not exist (which is strange, because I copied the output of dpkg -l).
Only a few of them were successfully downgraded, but I got a serious error when downgrading e2fslibs to its previous version: The following packages have unmet dependencies:
e2fsprogs: PreDepends: e2fslibs
Somehow that messed up initramfs and/or initramfs-tools and now the instance is running but I can't get into it.
Connecting to the instance in Google Cloud Platform: Connecting...
Could not connect, retrying (1/3).
Google Cloud Shell isn't able to gcloud compute ssh: Permission denied (publickey).
Using gcloud locally also says Permission denied (publickey).
I checked the following:
There are project public keys defined; there aren't any instance public keys defined or any other metadata ( Google Cloud SSH Keys )
In google cloud platform >> compute engine >> VM instances >> permissions>> I see 'compute' is disabled
verify that the daemon is running by navigating to the serial console output page and looking for output lines prefixed with the accounts-from-metadata: string. If you are using a standard image but you do not see these output prefixes in the serial console output, the daemon might be stopped --> I don't see this, so I expect it's NOT running (see the gcloud sketch after this list).
check firewall rules: (gcloud compute firewall-rules list)
default-allow-ssh default 0.0.0.0/0 tcp:22 //rule is present
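The serial console output mentioned above can also be fetched from the CLI (instance name and zone are taken from later in this question):

gcloud compute instances get-serial-port-output tta-media-test-2 --zone europe-west1-c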
The following packages were upgraded:
apt
apt-transport-https
apt-utils
binutils
cloud-init
cloud-initramfs-growroot
cloud-initramfs-rescuevol
comerr-dev
dosfstools
e2fslibs
e2fsprogs
gce-cloud-config
gce-daemon
gce-imagebundle
gce-startup-scripts
google-cloud-sdk
landscape-client
landscape-common
libapt-inst1.4
libapt-pkg4.12
libcomerr2
libss2
libudev0
mountall
nginx
nginx-common
nginx-full
ntp
ntpdate
procps
python-apt
python-apt-common
python-lazr.restfulclient
udev
unattended-upgrades
update-manager-core
upstart
whoopsie
x11-utils
This is from the serial output:
- mountall: Event failed
- landscape-client is not configured, please run landscape-config.
What to do next?
Apply a startup script to the running instance (following https://cloud.google.com/compute/docs/startupscript) and try to perform apt-get upgrade? (see the sketch after this list)
try to create a new public key (again) in google cloud shell to access the instance?
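For the first option, a rough sketch of attaching a startup script to the running instance (hedged: the instance name and zone come from later in this question, and the script body is only a guess at a first repair step):

cat > fix-packages.sh <<'EOF'
#!/bin/bash
apt-get -f install -y
EOF
gcloud compute instances add-metadata tta-media-test-2 --zone europe-west1-c --metadata-from-file startup-script=fix-packages.sh
gcloud compute instances reset tta-media-test-2 --zone europe-west1-c    # script runs at next boot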
In google cloud shell the first time this file was generated after typing gcloud compute --project "enduring-palace-762" ssh --zone "europe-west1-c" "tta-media-test-2"
WARNING: The private SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key. This tool needs to create the directory /home/developer/.ssh
The generated public key was stored in /home/developer/.ssh/google_compute_engine.pub. I made a copy of that, prepended the username, and added the content of the public key to Compute Engine >> metadata >> ssh keys. *The key is accepted, but the username doesn't show like it does with all the other username-key pairs.
I still get a Permission denied (publickey) error when using gcloud compute ssh tta-media-test-2 --zone europe-west1-c
When I provide the ssh key file like this
gcloud compute ssh tta-media-test-2 --zone europe-west1-c --ssh-key-file=my-ssh-keys_copy.pub (pwd is inside the folder where key file is)
WARNING: The public SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
I get the same result when I generate a new key with ssh-keygen -t rsa -f my-ssh-keys
Any other possible solution would be much appreciated.
[update] I am able to ssh into the 'broken' instance from local using ssh user@externalIpOfInstance. My plan is to bring it to an upgraded stable state, create a snapshot and see from there.
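Once it looks stable, the snapshot can be taken from the CLI (hedged: the disk name is assumed to match the instance name, which is the default for boot disks):

gcloud compute disks snapshot tta-media-test-2 --zone europe-west1-c --snapshot-names tta-media-test-2-stable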
sudo apt-get -f install
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up initramfs-tools (0.99ubuntu13.5) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.13.0-79-generic
E: /usr/share/initramfs-tools/hooks/fixrtc failed with return 1.
update-initramfs: failed for /boot/initrd.img-3.13.0-79-generic with 1.
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
google-chrome-stable
The following packages will be upgraded:
comerr-dev libcomerr2 libss2 unattended-upgrades
4 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
1 not fully installed or removed.
Need to get 0 B/188 kB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue [Y/n]? y
Preconfiguring packages ...
(Reading database ... 178509 files and directories currently installed.)
Preparing to replace comerr-dev 2.1-1.42-1ubuntu2.2 (using .../comerr-dev_2.1-1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement comerr-dev ...
Preparing to replace libcomerr2 1.42-1ubuntu2.2 (using .../libcomerr2_1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement libcomerr2 ...
Preparing to replace libss2 1.42-1ubuntu2.2 (using .../libss2_1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement libss2 ...
Preparing to replace unattended-upgrades 0.76ubuntu1.1 (using .../unattended-upgrades_0.76ubuntu1.2_all.deb) ...
Unpacking replacement unattended-upgrades ...
Processing triggers for install-info ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
Setting up initramfs-tools (0.99ubuntu13.5) ...
update-initramfs: deferring update (trigger activated)
Setting up libcomerr2 (1.42-1ubuntu2.3) ...
Setting up comerr-dev (2.1-1.42-1ubuntu2.3) ...
Setting up libss2 (1.42-1ubuntu2.3) ...
Setting up unattended-upgrades (0.76ubuntu1.2) ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.13.0-79-generic
E: /usr/share/initramfs-tools/hooks/fixrtc failed with return 1.
update-initramfs: failed for /boot/initrd.img-3.13.0-79-generic with 1.
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
sudo apt-get remove initramfs-tools-bin
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
cron : Depends: adduser but it is not going to be installed
procps : Depends: initscripts
upstart : Depends: initscripts
Depends: mountall
Depends: ifupdown (>= 0.6.10ubuntu5)
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
What to do here?
If you were able to SSH into the instance using a given SSH key before, the most likely reason it would stop working is if you somehow removed that SSH key or if the SSH daemon wasn't running/was otherwise broken. It appears as though in the downgrade you broke this machine.
Why do you need this particular VM instance? Does it have important data? If so, you can shut it off, mount its disk using a fresh VM instance, and copy that data off.
If it runs a service, you should probably cut over to a new machine: even if you're able to get into the instance, there's no telling what still works and what doesn't.
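In gcloud terms, the rescue path described above looks roughly like this (a sketch; instance, disk, and device names are assumptions based on defaults):

gcloud compute instances stop tta-media-test-2 --zone europe-west1-c
gcloud compute instances create rescue-vm --zone europe-west1-c
gcloud compute instances attach-disk rescue-vm --disk tta-media-test-2 --zone europe-west1-c
# inside rescue-vm: the attached disk typically appears as /dev/sdb
sudo mkdir /mnt/broken && sudo mount /dev/sdb1 /mnt/broken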
I'm facing an issue in BigBlueButton installation
Reading state information...
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
bigbluebutton : Depends: bbb-config but it is not going to be installed
gce-compute-image-packages : Depends: google-compute-engine but it is not going to be installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
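As the apt output itself suggests, the first thing to try is (package name taken from the error above):

sudo apt --fix-broken install
sudo apt-get install bigbluebutton    # then retry the install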