Connection failing between Raspberry Pi and Mac - rabbitmq

I have the following setup:
a) A rabbitmq-server and Pika installed on a Mac OS X Yosemite machine.
There is a rabbitmq.config file at /usr/local/etc/rabbitmq/rabbitmq.config in which I have the statement:
{loopback_users, []}
b) On the Raspberry Pi I have Pika installed. I also installed the rabbitmq-server there.
The send.py and receive.py scripts, using Pika, work locally on both machines.
The send from the Mac to the RPi works, but the send from the RPi to the Mac fails as follows:
Traceback (most recent call last):
File "send.py", line 5, in
'192.168.1.4'))
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 339, in init
self._process_io_for_connection_setup()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 374, in _process_io_for_connection_setup
self._open_error_result.is_ready)
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
========================================
The firewall is not enabled on the Mac.
There are no errors noted in the server log.
The send.py code is:
#!/usr/bin/env python
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters(
    '192.168.1.4'))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
===========================
I can see activity on the port on both machines:
sudo tcpdump port 5672
On RPi
15:09:05.394815 IP raspberrypi.home.40483 > ArnoldBileysMBP.home.amqp: Flags [S], seq 1428528534, win 29200, options [mss 1460,sackOK,TS val 1626318 ecr 0,nop,wscale 6], length 0
15:09:05.460755 IP ArnoldBileysMBP.home.amqp > raspberrypi.home.40483: Flags [R.], seq 0, ack 1428528535, win 0, length 0
On Mac
11:09:05.547322 IP raspberrypi.home.40483 > arnoldbileysmbp.home.amqp: Flags [S], seq 1428528534, win 29200, options [mss 1460,sackOK,TS val 1626318 ecr 0,nop,wscale 6], length 0
11:09:05.547362 IP arnoldbileysmbp.home.amqp > raspberrypi.home.40483: Flags [R.], seq 0, ack 1428528535, win 0, length 0
Any help would be deeply appreciated.

I found the fix at "Open port 5672/tcp for access to RabbitMQ on Mac".
I deleted the "NODE_IP_ADDRESS=127.0.0.1" statement in the /usr/local/etc/rabbitmq/rabbitmq-env.conf file. This was in addition to the loopback_users change described above.

Related

SVN + Apache HTTPD - 500 Internal Server Error after several checkouts using Jenkins

Backstory:
We decided to migrate the SVN from On-Prem to Cloud.
Both servers are CentOS 7; the SVN version on-prem is 1.8.15, while on the cloud server it is 1.8.19.
The access protocol changed from SVN (port 3690) to HTTPS (443), so the httpd setup is new to us.
For the migration of the repository, I first tried a plain old 'rsync' between the servers to move the whole repository. That worked in the sense that all the functionality and all the revisions were there, however I still got the error described below.
I thought it might be some kind of DB issue, so I then used the SVN-native 'svnadmin dump' and 'svnadmin load' commands to import the repository. The issue still persists.
I am using SVN accessed using HTTPS through Apache HTTPD.
Everything seems to work fine and all the functionality is there, but after several checkouts I start getting a 500 Internal Server Error.
Currently, the issue is triggered by a Jenkins pipeline that checks out from SVN; here is the error output:
ERROR: Failed to check out https://svn-repo/path/to/files
org.tmatesoft.svn.core.SVNException: svn: E175002: OPTIONS of '/path/to/files': 500 Internal Server Error (https://svn-repo)
svn: E175002: OPTIONS request failed on '/path/to/files'
The reason I don't think the problem is on the client (Jenkins) side is that the same error occurred when I checked out with the SVN client on my PC.
Here are the logs from HTTPD:
10.10.10.16 - - [17/Aug/2020:12:45:21 +0300] "OPTIONS /path/to/files HTTP/1.1" 401 381
10.10.10.16 - user [17/Aug/2020:12:45:21 +0300] "OPTIONS /path/to/files HTTP/1.1" 500 541
As you can see, I receive a 401 before getting the 500, but as I said the checkouts happen one after another, so it could not have checked anything out previously if the authorization were invalid (the permissions for the whole repo are identical, not path-based).
Side note: the 401 occurs because of how the WebDAV protocol works: the client first tries the request unauthenticated, and only sends the credentials after it gets a 401 back.
---- Progress Report ----
It's been brought to my attention that 'SVNAllowBulkUpdates On' could be the cause of this issue.
I tried running the pipeline with the directive set to both 'Prefer' and 'Off', however that did not fix the issue.
Possibly related issue:
Large SVN checkout fails sporadically
I upgraded the SVN to version 1.10 successfully.
After upgrading and running the pipeline once more, I saw the following error in the SVN error log:
[Thu Oct 01 17:25:55.268333 2020] [dav:error] [pid 9465] [client 11.11.11.11:39580] Provider encountered an error while streaming a REPORT response. [500, #0]
[Thu Oct 01 17:25:55.268355 2020] [dav:error] [pid 9465] [client 11.11.11.11:39580] A failure occurred while driving the update report editor [500, #104]
[Thu Oct 01 17:25:55.268360 2020] [dav:error] [pid 9465] [client 11.11.11.11:39580] Connection reset by peer [500, #104]
Since the log points to a client-side issue, I started searching for configuration changes related to the client. I added the following in "~/.subversion/servers":
http-timeout = 259200
Source: https://confluence.atlassian.com/fishkb/svn-operations-taking-longer-than-an-hour-time-out-229180362.html
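For context, that option sits in the [global] section of the client-side servers file, roughly like this (the section name is Subversion's standard layout; the value is the one from the question):
# ~/.subversion/servers (client side)
[global]
http-timeout = 259200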
Unfortunately, this still did not help.
Later, I ran 'tcpdump' on port 443 (tcpdump -nnS -i ens5 port 443) to see the headers of the incoming/outgoing packets. I ran the command on the Jenkins slave and on the SVN server simultaneously, and found that at a certain point they stopped exchanging data for precisely one minute, after which the SVN server sent a connection-termination (FIN) packet to the Jenkins slave; the slave later tried to send more data and then aborted the connection with a reset:
17:14:56.976631 IP SVN > Jenkins-Slave: Flags [.], ack 4264260017, win 235, options [nop,nop,TS val 1054806523 ecr 1461582542], length 0
17:14:56.976961 IP SVN > Jenkins-Slave: Flags [P.], seq 394455454:394456190, ack 4264260017, win 235, options [nop,nop,TS val 1054806523 ecr 1461582542], length 736
17:14:56.983612 IP Jenkins-Slave > SVN: Flags [P.], seq 4264260017:4264260557, ack 394456190, win 279, options [nop,nop,TS val 1461582631 ecr 1054806523], length 540
17:14:56.983688 IP Jenkins-Slave > SVN: Flags [P.], seq 4264260557:4264260693, ack 394456190, win 279, options [nop,nop,TS val 1461582631 ecr 1054806523], length 136
17:14:57.065351 IP SVN > Jenkins-Slave: Flags [.], ack 4264260693, win 252, options [nop,nop,TS val 1054806611 ecr 1461582631], length 0
17:15:57.124806 IP SVN > Jenkins-Slave: Flags [P.], seq 394456190:394457011, ack 4264260693, win 252, options [nop,nop,TS val 1054866672 ecr 1461582631], length 821
17:15:57.124832 IP SVN > Jenkins-Slave: Flags [F.], seq 394457011, ack 4264260693, win 252, options [nop,nop,TS val 1054866672 ecr 1461582631], length 0
17:15:57.125768 IP Jenkins-Slave > SVN: Flags [P.], seq 4264260693:4264260724, ack 394457012, win 300, options [nop,nop,TS val 1461642773 ecr 1054866672], length 31
17:15:57.125804 IP Jenkins-Slave > SVN: Flags [R.], seq 4264260724, ack 394457012, win 300, options [nop,nop,TS val 1461642774 ecr 1054866672], length 0
I obfuscated the IPs for obvious reasons.

Jedis sending QUIT request to redis server internally

My Jedis client is sending a QUIT request internally, causing the Redis server to close the connection.
This is unexpected behavior.
Below is the tcpdump of my host.
QUIT
17:12:17.702322 IP SOURCE_HOST.29039 > DEST_HOST.34250: Flags [P.], seq 1290557:1290562, ack 833190, win 65160, options [nop,nop,TS val 346069381 ecr 351399090], length 5
E..98\#.;...
%V.
W..qo....
.P.l.....U......
........+OK
17:12:17.702345 IP DEST_HOST.34250 > SOURCE_HOST.29039: Flags [.], ack 1290562, win 65366, options [nop,nop,TS val 351399092 ecr 346069381], length 0
E..4..#.#..l
I am performing the following operations:
1. get()
2. set()
3. setex()
4. ping()
5. del()
6. keys()
I am using Jedis 2.9.3 with Kotlin.
Operation: get a resource (connection) from the JedisPool and send a request (e.g. get()).
Does anyone have an idea why Jedis sends QUIT request without calling it explicitly?
JedisPool uses JedisFactory.
destroyObject in JedisFactory calls quit.
destroyObject of JedisFactory actually overrides destroyObject of PooledObjectFactory.
GenericObjectPool uses PooledObjectFactory.
destroy in GenericObjectPool calls destroyObject of PooledObjectFactory
(and therefore destroyObject of JedisFactory), which results in quit being called.
There are many cases in which destroy of GenericObjectPool gets called.

ncclient: connecting to a NETCONF server

I want to use the Python library ncclient 0.6.6 with Python 2.7.15 to connect to a NETCONF server (netopeer2) and read out the running config.
I tried to follow the example from the manual, running this code in the console:
from ncclient import manager

with manager.connect(host="*the IP address*", port=*the port*, timeout=None, username="*user*", password="*pwd*") as m:
    c = m.get_config(source='running').data_xml
    with open("%s.xml" % host, 'w') as f:
        f.write(c)
As written in the manual, I try to disable public-key authentication by passing allow_agent and look_for_keys as False. Unfortunately, this does not work properly, because I get the error message:
File "<stdin>", line 1, in <module>
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/manager.py", line 177, in connect
return connect_ssh(*args, **kwds)
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/manager.py", line 143, in connect_ssh
session.connect(*args, **kwds)
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/transport/ssh.py", line 481, in connect
raise SSHUnknownHostError(known_hosts_lookup, fingerprint)
ncclient.transport.errors.SSHUnknownHostError: Unknown host key [e3:8d:35:a9:43:f9:3c:8a:f4:d3:88:5b:a9:36:93:59] for [[192.168.56.2]:1831]
I do not get why it still complains about the unknown host key, even though I explicitly disabled public-key authentication.
The netopeer NETCONF server is definitely running, for I get a "Hello" message as soon as I SSH into it from the terminal.
Did I miss something?
m = manager.connect(host="172.17.0.2", port=830, username="netconf", password="netconf", hostkey_verify=False)
did the trick. hostkey_verify has to be False: allow_agent and look_for_keys only affect how the client authenticates itself, whereas the error above is about verifying the server's host key against known_hosts, and that check is what hostkey_verify=False disables.
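Applied to the snippet from the question, a minimal sketch might look as follows (the host, port and credentials are placeholders/example values, and the output file name is arbitrary):
from ncclient import manager

# hostkey_verify=False skips the known_hosts check that raised SSHUnknownHostError.
# allow_agent=False and look_for_keys=False additionally stop paramiko from
# attempting public-key authentication, which is a separate mechanism.
with manager.connect(host="*the IP address*",
                     port=830,
                     username="*user*",
                     password="*pwd*",
                     hostkey_verify=False,
                     allow_agent=False,
                     look_for_keys=False,
                     timeout=None) as m:
    config = m.get_config(source='running').data_xml
    with open("running.xml", "w") as f:
        f.write(config)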

Ssh client.py not working, showing connection error

My config file:
Host server
User new_user
HostName 10.0.1.193
Port 55555
LocalForward 3000 10.0.1.193:6000
IdentityFile ~/.ssh/server
Client.py
import xmlrpclib
s = xmlrpclib.ServerProxy('http://localhost:3000')
print s.pow(2,3) # Returns 2**3 = 8
print s.add(2,3) # Returns 5
print s.div(5,2) # Returns 5//2 = 2
# Print list of available methods
print s.system.listMethods()
Server.py
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
# Restrict to a particular path.
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)
# Create server
server = SimpleXMLRPCServer(("localhost", 6000),
                            requestHandler=RequestHandler)
server.register_introspection_functions()
# Register pow() function; this will use the value of
# pow.__name__ as the name, which is just 'pow'.
server.register_function(pow)
# Register a function under a different name
def adder_function(x, y):
    return x + y
server.register_function(adder_function, 'add')
# Register an instance; all the methods of the instance are
# published as XML-RPC methods (in this case, just 'div').
class MyFuncs:
    def div(self, x, y):
        return x // y
server.register_instance(MyFuncs())
# Run the server's main loop
server.serve_forever()
My server.py is running fine, but when I run my client.py, it gives the following error:
Traceback (most recent call last):
File "client.py", line 4, in <module>
print s.pow(2,3) # Returns 2**3 = 8
File "/usr/lib/python2.7/xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1578, in __request
verbose=self.__verbose
File "/usr/lib/python2.7/xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/lib/python2.7/xmlrpclib.py", line 1292, in single_request
self.send_content(h, request_body)
File "/usr/lib/python2.7/xmlrpclib.py", line 1439, in send_content
connection.endheaders(request_body)
File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 776, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 757, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 111] Connection refused
I have checked that my ssh is working and I can ssh into the remote server with the given configuration, i.e.
ssh server
works fine. Can anyone explain what might be going wrong?
Your server runs and perhaps it does not complain, but that does not mean it "runs correctly", or more to the point, it does not mean the server is in the working state the client expects.
The above is somewhat cryptic for a reason: something unknown has gone wrong, and even though you don't know yet what's broken, you want to start testing things you know should work and verify they are in fact working. This is a useful debugging skill even if the error is meaningless to you.
In this case, the client error message is "connection refused", meaning "refused [at the server]".
Try this:
on your "client" PC in a Terminal/DOS window, run:
telnet [your server ip] [your server port]
You should expect the same error - a connection refused. Perhaps the server is not actually opening the port. Or perhaps the server opened the port, but you cannot see it remotely from another host due to a firewall on the server.
Also, running both client and server code on the same host can sometimes reveal more clues (it should work, but if it doesn't, there may be more than one problem).
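If telnet is not handy, a minimal Python sketch of the same check might look like this (localhost:3000 is the address the client above uses; adjust as needed):
import socket

# Try a plain TCP connection to the port the XML-RPC client uses.
# "Connection refused" here means nothing is listening on that address/port.
host, port = 'localhost', 3000
try:
    s = socket.create_connection((host, port), timeout=5)
    print('connected to %s:%d - something is listening' % (host, port))
    s.close()
except socket.error as e:
    print('could not connect to %s:%d: %s' % (host, port, e))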

tftp retry timeout exceeded

My issue is that the retry count is exceeded when I download a kernel image to an Econa processor board (Econa is an ARM-based processor) via TFTP, as shown below:
CNS3000 # tftp 0x4000000 bootpImage.cns3420.uclibc
MAC PORT 0 : Initialize bcm53115M
MAC PORT 2 : Initialize RTL8211
TFTP from server 192.168.0.219; our IP address is 192.168.0.112
Filename 'bootpImage.cns3420.uclibc'.
Load address: 0x4000000
Loading: T T T T T T T T T T
Retry count exceeded; starting again
The following points may help in finding the cause of this error.
Ping response is OK
CNS3000 # ping 192.168.0.219
MAC PORT 0 : Initialize bcm53115M
MAC PORT 2 : Initialize RTL8211
host 192.168.0.219 is alive
To verify that TFTP is running, I tried the steps shown below, and the TFTP server seems to be working. I placed a small file in /tftpboot:
# echo "Hello, embedded world" > /tftpboot/hello.txt
Then I fetched it via localhost:
# tftp localhost
tftp> get hello.txt
Received 23 bytes in 0.1 seconds
tftp> quit
Please note that there is no firewall or SELinux on my machine.
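Since the check above only exercises the loopback interface, it may be worth repeating the read from another host on the LAN. This is a minimal Python 2 sketch of a raw TFTP read request (RFC 1350); the server IP and the file name are the ones from the question, and the 5-second timeout is arbitrary:
#!/usr/bin/env python
# Minimal TFTP read request (RFC 1350) to check the server from another
# LAN host instead of from localhost.
import socket
import struct

SERVER = '192.168.0.219'
FILENAME = 'hello.txt'

# RRQ packet: opcode 1, filename, NUL, mode, NUL
rrq = struct.pack('!H', 1) + FILENAME + '\0' + 'octet' + '\0'

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(rrq, (SERVER, 69))
try:
    data, addr = sock.recvfrom(1024)
    opcode = struct.unpack('!H', data[:2])[0]
    # opcode 3 = DATA, opcode 5 = ERROR
    print('reply opcode %d (%d bytes) from %s' % (opcode, len(data), addr[0]))
except socket.timeout:
    print('no reply: the request or the reply was lost on the way to/from the server')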
Please verify whether the locations of these files are OK. I have placed the kernel image file bootpImage.cns3420.uclibc in /tftpboot. The TFTP service file is located in /etc/xinetd.d/tftp.
My TFTP service file is:
service tftp
{
socket_type =dgram
protocol=udp
wait=yes
user=root
server=/usr/sbin/in.tftpd
server_args=-s /tftpboot -b 512
disable=no
per_source=11
cps=100 2
flags=ipv4
}
printenv response in U-boot is:
CNS3000 # printenv
bootargs=root=/dev/mtdblock0 mem=256M console=ttyS0
baudrate=38400
ethaddr=00:53:43:4F:54:54
netmask=255.255.0.0
tftp_bsize=512
udp_frag_size=512
mmc_init=mmcinit
loading=fatload mmc 0 0x4000000 bootpimage-82511
running=go 0x4000000
bootcmd=run mmc_init;run loading;run running
serverip=192.168.0.219
ipaddr=192.168.0.112
bootdelay=5
port=1
bootfile=/tftpboot/bootpImage.cns3420.uclibcl
stdin=serial
stdout=serial
stderr=serial
verify=n
Environment size: 437/4092 bytes
Regards
Waqas
Loading: T T T T T T T T T T
means there is no transfer at all; this can be caused by a wrong interface setting, e.g.
U-Boot is configured for 100 Mbit full duplex and you try to connect via half duplex or 10 Mbit (or some mix of it). Another point is the MTU size, which should be 1500 (U-Boot cannot handle packet fragmentation).
Hint for Windows/VMware users:
TFTP timeouts from U-Boot can be caused by Windows IP forwarding.
1) If you have a home network: switch it off.
2) If you are running the Routing and Remote Access service: shut down the service.
3) Check the registry for IP forwarding:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter
set value to 0 (and maybe reboot)