aioamqp can't connect to RabbitMQ server - rabbitmq

The server logs an error when aioamqp tries to connect, but when I use pika to connect to the RabbitMQ server with the same parameters (same username, password, and virtual host), pika can connect to the server without problems.
The failing code is below:
import aioamqp

async def main():
    transport, protocol = await aioamqp.connect(
        host='localhost', virtualhost='/', port=5672, ssl=False,
        insist=True, login_method='AMQPLAIN')  # other parameters left at defaults
How can I solve it?

I have fixed the error, and the solution is below:
1. The code on the master branch does not match the aioamqp release on PyPI.
2. So I cloned the master branch into my project and ran some tests against it; it worked well for me (an alternative install command is below).
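If you would rather install straight from the master branch instead of copying the clone into your project, a pip VCS install should also work (a sketch; I'm assuming the upstream repository is Polyconseil/aioamqp on GitHub):

pip install git+https://github.com/Polyconseil/aioamqp.git@master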

Related

Datastax cassandra cpp_driver hangs when connecting to node

I set up ScyllaDB on my Debian 9.6 machine. When I run cqlsh I can connect to it, create tables, run queries, etc.
Now I tried to write a simple program in C++ using the Datastax driver, and it can't connect. It always blocks when it tries to connect.
The scylla package I installed is:
scylla | 3.0.11-0.20191126.3c91bad0d-1~stretch
cpp_driver is the current master from github: https://github.com/datastax/cpp-driver
Now I tried to run the examples/simple project which is included in the driver, so I assume it should work, but it shows the same problem. I don't get any errors; it just blocks:
CassCluster* cluster = cass_cluster_new();
CassSession* session = cass_session_new();
const char* hosts = "127.0.0.1";
cass_cluster_set_contact_points(cluster, hosts);
cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);
CassFuture* connect_future = cass_session_connect(session, cluster);
// cass_future_error_code() waits for the connect future; here it blocks now forever...
CassError er = cass_future_error_code(connect_future);
I also tried to run it on Ubuntu 16.04, but it shows the same problem. Since connecting with cqlsh works, I don't think it is a configuration problem, but rather something with the cpp_driver.
I also traced the TCP connection, and I can see that the cpp_driver talks to the server; the conversation looks similar to the cqlsh one.
I finally found the solution for this issue. We were using cpp_driver 2.15.1, which apparently changed something in the event handling according to its release notes. When I downgraded to 2.15.0, the problem was gone and the connection could be established successfully.
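If you build the driver from source, pinning the checkout to the 2.15.0 tag looks roughly like this (a sketch, assuming the standard CMake build from the driver's README and that build dependencies such as libuv are already installed):

git clone https://github.com/datastax/cpp-driver.git
cd cpp-driver
git checkout 2.15.0   # pin to the release before the event-handling change
mkdir build && cd build
cmake .. && make && sudo make install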

Unable to instantiate the chaincode in multicloud setup

I am trying to achieve a multicloud architecture. My network has 2 peers, 1 orderer, and a webclient. This network is in Azure. I am trying to add a peer from Google Cloud Platform to the Azure channel. For this, I created the crypto-config for the 3rd peer from the Azure webclient, and in the crypto-config I made changes so that the peers in Azure keep their own certificates while the 3rd peer gets the newly created certificates. Now I can install, instantiate, invoke, and query chaincode on peers 1 and 2, and I can install the chaincodes on the 3rd peer. But I am unable to instantiate the chaincodes there.
Getting the following error: Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Post http://unix.sock/containers/create?name=dev-(CORE_PEER_ID)-documentCC-1: dial unix /var/run/docker.sock: connect: permission denied
Can anyone guide me on this?
Note: all the peers, the orderer, and the webclient are running in different VMs.
@soundarya
It doesn't matter in how many places your solution is deployed.
The problem is a Docker socket permission issue: the peer is not allowed to talk to /var/run/docker.sock. You are running Docker through sudo; instead, add your user to the docker group.
The link below will help you out:
https://www.digitalocean.com/community/questions/how-to-fix-docker-got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket
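In short, the usual fix from that link is to add your user to the docker group and start a new login session (adjust the user to whichever account issues the Docker calls on the peer VM):

sudo usermod -aG docker ${USER}
# log out and back in (or run "newgrp docker") for the group change to take effect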
To learn more about docker.sock, you can refer to my answer to another question: Can anyone explain docker.sock

Can someone please tell me how to define a check_disk service with check_nrpe in icinga 2?

I'm trying to check the disk status of an Ubuntu 16.04 client instance from an Icinga2 master server. I tried to use the NRPE plugin to check the disk status, and I ran into trouble when defining the service in the service.conf file. Can someone please tell me which files have to be changed when using NRPE? I'm new to Icinga and NRPE.
I was able to find the solution to my problem. I'm putting it here because it may help someone else.
I'll use the check_load example to explain.
1. First of all, you need to create a .conf file (name: 192.168.30.40-host.conf) for the client server you are going to monitor with icinga2. It should be placed in the /etc/icinga2/conf.d/ folder.
/etc/icinga2/conf.d/192.168.30.40-host.conf
object Host "host1" {
  import "generic-host"
  display_name = "host1"
  address = "192.168.30.40"
}
2. Then you should create a service file for your client.
/etc/icinga2/conf.d/192.168.30.40-service.conf
object Service "LOAD AVERAGE" {
  import "generic-service"
  host_name = "host1"
  check_command = "nrpe"
  vars.nrpe_command = "check_load"
}
3. This is the important part: you should add this line to the nrpe.cfg file on the monitored client (the machine running the NRPE daemon, e.g. the nagios-nrpe-server service).
/etc/nagios/nrpe.cfg file
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 20,15,10
4. Make sure to restart icinga2 and the NRPE service after making any change (see the commands below).
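To validate the configuration and test the check end to end, you can run something like this (a sketch; the check_nrpe plugin path varies by distribution, so adjust it to your system):

# on the icinga2 master: validate the configuration, then restart
icinga2 daemon -C
systemctl restart icinga2
# call the NRPE command directly from the master
/usr/lib/nagios/plugins/check_nrpe -H 192.168.30.40 -c check_load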
You could also use an icinga2 agent instead of nrpe. The agent will be able to receive its configuration from a master or satellite, and perform local checks on the server.

Connect via ssh to CF - Error

I need to debug my application; we are using version 2.65 (Diego).
I followed this guide:
http://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html
While running cf ssh myapp via the CLI, nothing happens. What should I do in order:
1. To see the container file system?
2. To be able to debug it?
The application was deployed successfully to CF.
I'm using a Node.js app.
All other commands work well.
When I run the command cf ssh myapp, I get this error after two minutes:
FAILED
Error opening SSH connection: dial tcp 52.23.201.1:2277: getsockopt: operation timed out
It sounds like a platform issue with the non-standard SSH port.
You can find more manual SSH-access steps and troubleshooting tips at https://docs.cloudfoundry.org/devguide/deploy-apps/ssh-apps.html
If you believe it is an instance issue, you can download a copy of the droplet/file system using the API; see https://apidocs.cloudfoundry.org/213/apps/downloads_the_staged_droplet_for_an_app.html
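A few cf CLI checks may also help narrow it down (a sketch; ssh-enabled and enable-ssh are standard cf CLI commands, and myapp stands in for your application name):

cf ssh-enabled myapp   # is SSH enabled for this app?
cf enable-ssh myapp    # enable it if not, then restart the app
cf restart myapp
cf curl /v2/info       # shows the platform's app_ssh_endpoint host and port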

Setting Up Fabric SSH, Error: Timed Out

I'm new to Fabric, so this might have a simple answer I've missed due to bad search terminology.
I'm trying to start a new Ubuntu EC2 instance in AWS, then connect to it with Fabric and have it execute several commands. However, it seems there is a problem with Fabric's SSH connection; maybe I'm defining some env variable wrong?
from fabric.api import env, run, task

@task  # starts new EC2 instance and sets env variables
def prep_deploy():
    # ... code to start new EC2 instance, named buildhost ...
    env.hosts = [buildhost.public_dns_name]
    env.user = "ubuntu"
    env.key_filename = ".../keypair.pem"
    env.port = 22

@task
def deploy():
    run("echo $HOME")  # code fails here
    # ....
I run fab prep_deploy deploy, since I read you need a new task for the new env variables to take effect.
I get
Fatal error: Timed out trying to connect to ...amazonaws.com (tried 1 time)
Underlying exception: timed out
The security groups for the instance are open to SSH: I can connect through PuTTY. In fact, if I empty the `env.host_string` variable at the start of deploy(), when it prompts me to manually input a host, I can type in "ubuntu@...amazonaws.com:22", with the host name exactly as seen in the output at the task start, and it will connect to the instance. But I can't figure out how to set the environment variables so that Fabric picks up the host name.
It looks like your Fabric settings and use of the env variables are correct; I was able to use the code you provided to connect to my Ubuntu VM. I am wondering if you are having a connection issue because the Amazon instance is not fully booted and ready for connections when your script runs the second task. I have run into that issue with other VM hosts.
I added the following code to check and retry the connection. This might help you:
import socket
import time
from fabric.api import env

def waitforssh():
    address = env.host_string
    port = 22
    while True:
        time.sleep(5)
        try:
            s = socket.socket()  # fresh socket for every attempt
            s.connect((address, port))
            s.close()
            return
        except socket.error:
            print("failed to connect to %s:%s" % (address, port))
Insert the function call at the start of your deploy task:
def deploy():
    waitforssh()
This should test the connection. If the port does not respond, it will wait 5 seconds and try again.
That could explain why your second attempt to connect works.