Unable to instantiate the chaincode in a multicloud setup - virtual-machine

I am trying to achieve a multicloud architecture. My network has 2 peers, 1 orderer and a web client, all running in Azure. I am trying to add a peer from Google Cloud Platform to the Azure channel. For this, I created the crypto-config for the 3rd peer from the Azure web client; in that crypto-config the Azure peers keep their own certificates, while for the 3rd peer I placed the newly created certificates. Now I can install, instantiate, invoke and query chaincode on peers 1 and 2, and I can install the chaincode on the 3rd peer, but I am unable to instantiate it there.
Getting the following error: Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Post http://unix.sock/containers/create?name=dev-(CORE_PEER_ID)-documentCC-1: dial unix /var/run/docker.sock: connect: permission denied
Can anyone guide me on this?
Note: all the peers, the orderer and the web client are running in different VMs.

#soundarya
It doesn't matter in how many places your solution is deployed.
The problem is that Docker is being reached through /var/run/docker.sock without the necessary permission; instead of running everything with sudo, add your user to the docker group.
The link below will help you out:
https://www.digitalocean.com/community/questions/how-to-fix-docker-got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket
To learn more about docker.sock, you can refer to my answer to another question: Can anyone explain docker.sock?
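For reference, a minimal sketch of that fix on the VM running the 3rd peer (this assumes a Linux host where the Docker package has already created a docker group, and $USER is the account that runs the peer):
# add the current user to the docker group so /var/run/docker.sock is reachable without sudo
sudo usermod -aG docker $USER
# apply the new group membership (or simply log out and back in)
newgrp docker
# this should now work without sudo
docker ps
After that, restart the peer on that VM and retry the instantiate.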


Calling a Jenkins job from a Codefresh pipeline fails with: x509: failed to load system roots and no roots provided

I have a Jenkins job which I would like to invoke from my Codefresh pipeline.
Using the following example from the Codefresh docs, I have my Codefresh pipeline configured and ready:
https://codefresh.io/docs/docs/integrations/jenkins-integration/#calling-jenkins-jobs-from-codefresh-pipelines
The resulting build runs with the following output:
Pulling image codefresh/cf-run-jenkins-job:latest
Pulled layer '1160f4abea84'
Pulled layer '6df1582e0e0e'
Digest: sha256:a95b23c24b51d5fc1705731f7d18c5134590b4bc61b91dcf5a878faf2aec60b3
Status: Downloaded newer image for codefresh/cf-run-jenkins-job:latest
INFO[0000] Going to trigger <jenkins_job_name> job on https://<jenkins_host>:8443
ERRO[0000] Post https://<jenkins_host>:8443/job/<jenkins_job_name>/build: x509: failed to load system roots and no roots provided
Successfully ran freestyle step: Triggering Jenkins Job
Reading environment variable exporting file contents.
Reading environment variable exporting file contents.
As you can see, the build fails to successfully trigger the Jenkins job.
After some research on the Internet, I came to the conclusion that this is an SSL certificate issue.
But I have no idea how to proceed from here. What exactly is missing, and where should it be configured? I would really appreciate any help here.
Do you know what kind of SSL configuration your Jenkins server has? Is it mutual authentication or just a server-side certificate? Is it self-signed or not?
Have you tried calling the Jenkins API on your own (outside of Codefresh) to check that SSL works fine?
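For example, a quick manual check with curl (the user, API token and CA bundle path are placeholders; the URL is the one from the build log above):
curl --cacert /path/to/jenkins-ca.pem --user <user>:<api_token> -X POST "https://<jenkins_host>:8443/job/<jenkins_job_name>/build"
If that only succeeds with -k/--insecure, the Jenkins certificate is not signed by a commonly trusted CA, which would match the "failed to load system roots and no roots provided" error in the build.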
Also I would suggest you open a support ticket (from the top right menu in the Codefresh UI) and make sure to mention the URL of the build that has this issue.

Hyperledger Fabric - backup and restore

I'm using Hyperledger Fabric and now I'm trying to make a backup of the current state and restore it on a different computer.
I'm following the procedure found in hyperledger-fabric-backup-and-restore.
The main steps being:
Copy the crypto-config and the channel-artifacts directory
Copy the content of all peer and orderer containers (see the sketch after these steps)
Modify the docker-compose.yaml to link containers volumes to the local directory where I have the backup copy.
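A rough sketch of steps 1 and 2, assuming the standard first-network layout, the sample container names (peer0.org1.example.com, orderer.example.com) and the default ledger locations:
mkdir -p backup
# step 1: crypto material and channel artifacts
cp -r crypto-config channel-artifacts backup/
# step 2: ledger data from a peer and from the orderer
docker cp peer0.org1.example.com:/var/hyperledger/production backup/peer0.org1
docker cp orderer.example.com:/var/hyperledger/production/orderer backup/orderer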
Yet it's not working properly in my case: when I restart the network with ./byfn.sh up, all the containers first come up correctly, but then whatever operation I try to execute on the channel (peer channel create, peer channel join, peer channel update) fails with the error:
Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1
Is there anything I should do that is not mentioned in hyperledger-fabric-backup-and-restore?
I got the same error while trying to create a channel. Bringing the network down and then up again solved my problem.
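With the first-network sample that is roughly:
./byfn.sh down
./byfn.sh up
(down removes the old containers and their volumes, so the channel is created from scratch on the next up.)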

Impossible to install python package with anaconda on corporate laptop

I have Anaconda installed on my corporate laptop. I want to install 2 Python packages (Plotly & FuzzyWuzzy), but each time I try I get the same error message:
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url
<https://repo.anaconda.com/pkgs/r/win-64/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file a support request with your network engineering team.
ConnectTimeout(MaxRetryError("HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/r/win-64/repodata.json.bz2 (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x00000000054D45F8>, 'Connection to repo.anaconda.com timed out. (connect timeout=9.15)'))"))
I have tried to use the commands:
conda config --set ssl_verify no
or
conda config --set ssl_verify false
but neither of them works for me. Also, because it's my company laptop, I am not an admin, so I cannot change the firewall or connection properties, and I am not able to contact the service desk to help me with that.
So I'll be more than happy to hear your solution(s).
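In case it helps, the same ssl_verify setting, plus an explicit proxy, can also be put straight into the user's .condarc (the proxy host and port below are placeholders you would have to get from your IT or browser proxy settings):
ssl_verify: false
proxy_servers:
    http: http://proxy.mycompany.com:8080
    https: http://proxy.mycompany.com:8080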

OpenShift git push error

I recently installed an OpenShift instance with 2 brokers, 2 nodes and 3 MongoDB/ActiveMQ nodes.
I used the OpenShift Origin Puppet module and it is mostly working OK.
I can create, move and deploy normal and scalable applications, but when I push changes to my gear (using git) I get the following error message:
Failed to report deployment to broker. This will be corrected on the next git push. Message: Connection reset by peer - SSL_connect
The push itself is successful:
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
But I always get this error message when I push.
I checked the node and broker logs, tried to tcpdump the servers and inspect the node-broker traffic in Wireshark, and tried to google the error, but came up with mostly nothing.
I also went over the deployment guide and checked the installation, and everything seems to be in order.
When I:
curl https://MyBroker/broker/rest/api
I get an API response and not an SSL error:
{"api_version":1.7,"data":{"API": {"href":"https://MyBroker/broker/rest/api","method":"GET","optional_params":[] ..
Any help will be appreciated.
Thank you
Keren

ServiceStack.Redis: Unable to Connect: sPort: 50071

I'm using the ServiceStack Redis Client and I was hoping that I could get a clarification on what might cause the following error ... "Unable to Connect: sPort: 50071"? I'm using the "PooledRedisClientManager" object for connections. Thanks for any assistance.
IF YOU ARE USING A SELF-HOSTED REDIS SERVER AND USING THE ServiceStack Redis Client, THEN BUYER BEWARE.
As of 9/23/2015
ServiceStack does license validation in the client code (rather than on the server). If you are ripping through a lot of messages (6000+ an hour), you will hit the free-quota limit. The resulting error is:
Unable to Connect: sPort:
However, it does not handle its custom LicenseException and expose the error correctly. The underlying error would be something like this:
The free-quota limit on '6000 Redis requests per hour' has been reached. Please see https://servicestack.net to upgrade to a commercial license or visit https://github.com/ServiceStackV3/ServiceStackV3 to revert back to the free ServiceStack v3.
I doubt you have imposed such a limit on your server :-)
This could be a timeout issue; try increasing it:
pooledRedisClientManager.ConnectTimeout = 1000
You need to check that you are not creating a new PooledRedisClientManager for each request / usage. You will quickly run out of ports. Use a singleton approach in a web environment.