What is this iptables log entry about? - ruby-on-rails-3

I have a Rails application running on a server where I added some iptables rules to improve security. Now OmniAuth callbacks have stopped working. Every time I try to log in with any provider, I get this error in my application log:
Errno::ENETUNREACH (Network is unreachable - connect(2))
And this dropped packet gets logged to syslog:
IN=eth0 OUT= MAC=40:40:ea:31:ac:8d:64:00:f1:cd:1f:7f:08:00 SRC=66.220.147.99 DST=my_ip LEN=56 TOS=0x00 PREC=0x00 TTL=88 ID=0 DF PROTO=TCP SPT=443 DPT=37035 WINDOW=14480 RES=0x00 ACK SYN URGP=0
Can someone tell me what that entry in my syslog is about, and what kind of iptables rule is needed to allow it?
If needed, I can also add the rules I have applied so far.
EDIT:
The syslog line was incorrect, so I replaced it.

The answer to my original question was found at http://lists.debian.org/debian-user/2002/07/msg01187.html:
IN = interface the packet came in on
OUT = interface used for sending the packet
MAC = destination and source MAC addresses, followed by the EtherType (08:00 = IPv4)
SRC = IP of the sender
DST = IP of the receiver
LEN = Length of the packet
TOS = Type of Service field
PREC = Precedence
TTL = Time to live (hop count of the packet)
ID = Packet ID number
DF = "Don't fragment" bit
PROTO = The protocol
SPT = Source port
DPT = Destination port
WINDOW = TCP window size advertised by the sender
RES = Reserved bits
And then the TCP flags at the end of the row:
ACK = Acknowledgment flag is set
SYN = Synchronize flag is set (SYN together with ACK is the second step of the TCP handshake)
URGP = Urgent pointer
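Read with that key, the logged packet is a SYN-ACK (both the ACK and SYN flags are set) coming from port 443 of 66.220.147.99, i.e. the provider's HTTPS server answering an outbound connection my app made, and the firewall dropped the reply because only specific inbound ports were opened. The usual fix is not to open the ephemeral port range but to accept return traffic for connections the server itself initiated, with a connection-tracking rule along these lines (assuming a default-drop INPUT chain; exact placement depends on your other rules):
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT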

Related

How to use DCCP with Twisted? (Datagram Congestion Control Protocol)

At the interface level DCCP is like TCP: you connect and then send/receive bytes.
I was wondering if it's possible to make DCCP connections in Twisted by just adapting the wrappers for TCP...
According to the sample code (below), what needs to be changed is:
at socket instantiation: use different parameters
before using the socket: set some options
Then everything else would be the same...
Hints: I've spotted addressFamily and socketType in the sources of Twisted, but I have no idea how to cleanly set them in the protocol factory. Also, the protocol number (the 3rd parameter, here IPPROTO_DCCP) is always kept at the default. I have no clue either how to access the socket to call setsockopt.
import socket

# DCCP constants missing from the socket module
socket.DCCP_SOCKOPT_PACKET_SIZE = 1
socket.DCCP_SOCKOPT_SERVICE = 2
socket.SOCK_DCCP = 6
socket.IPPROTO_DCCP = 33
socket.SOL_DCCP = 269

packet_size = 256
address = (socket.gethostname(), 12345)

# Create sockets
server, client = [socket.socket(socket.AF_INET, socket.SOCK_DCCP,
                                socket.IPPROTO_DCCP) for i in range(2)]
for s in (server, client):
    s.setsockopt(socket.SOL_DCCP, socket.DCCP_SOCKOPT_PACKET_SIZE, packet_size)
    s.setsockopt(socket.SOL_DCCP, socket.DCCP_SOCKOPT_SERVICE, True)

# Connect sockets
server.bind(address)
server.listen(1)
client.connect(address)
s, a = server.accept()

# Echo loop
while True:
    client.send(raw_input("IN: "))
    print "OUT:", s.recv(1024)
More about DCCP:
https://www.sjero.net/research/dccp/
https://wiki.linuxfoundation.org/networking/dccp
TL;DR: DCCP is a protocol that provides congestion control (like TCP) without guaranteeing reliability or in-order delivery of data (like UDP). The standard Linux kernel implements DCCP.

Mosquitto TLS sets auto payload size limit

I've implemented an unsecured Mosquitto broker which works fantastically for sending large amounts of data periodically (a ~200kb file once per minute) over port 1883.
Since I've implemented TLS, the broker seems to automatically reject data >128kb over port 8883, despite setting message_size_limit = 0.
Here's my mosquitto.conf:
listener 1883 localhost
listener 8883
certfile /etc/letsencrypt/live/example.com/cert.pem
cafile /etc/letsencrypt/live/example.com/chain.pem
keyfile /etc/letsencrypt/live/example.com/privkey.pem
And here's the script I use to test the broker, which works fine without TLS over port 1883:
import hashlib
import pickle
import time
import paho.mqtt.client as mqtt

m = hashlib.md5()

client = mqtt.Client("test")
client.tls_set(certfile="./mqtt/cert.pem", keyfile="./mqtt/key.pem")
client.connect("example.com", 8883)

# publish file as zip
with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()
    m.update(byte_array)
    file_hash = m.hexdigest()
    payload_json = {'byte_array': byte_array, 'md5': file_hash}
    client.publish("topic", pickle.dumps(payload_json), 0)

time.sleep(1)
client.disconnect()
Is there a limit on the payload size with TLS or is something wrong with my setting/script?
The problem here is that the MQTT client network loop is not being run.
When the payload is larger than can fit in a single TCP packet, the call to client.publish() needs to queue up the rest of the message; it is then broken up into multiple packets and sent via the client loop.
The correct response is not to increase the keepalive period. There are two ways to solve this with the Python Paho library.
First, you can use the paho.mqtt.publish helper module instead of the Client class. This includes a one-shot single() function that handles all the background tasks required to ensure the whole message is delivered.
import hashlib
import pickle
import paho.mqtt.publish as publish

m = hashlib.md5()

tls_opt = {
    'certfile': "./mqtt/cert.pem",
    'keyfile': "./mqtt/key.pem"
}

with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()
    m.update(byte_array)
    file_hash = m.hexdigest()
    payload_json = {'byte_array': byte_array, 'md5': file_hash}
    publish.single("topic", payload=pickle.dumps(payload_json), qos=0,
                   hostname="example.com", port=8883, tls=tls_opt)
The second option is to start the network loop, as follows:
import hashlib
import pickle
import time
import paho.mqtt.client as mqtt

m = hashlib.md5()

client = mqtt.Client("test")
client.tls_set(certfile="./mqtt/cert.pem", keyfile="./mqtt/key.pem")
client.connect("example.com", 8883)
client.loop_start()

# publish file as zip
with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()
    m.update(byte_array)
    file_hash = m.hexdigest()
    payload_json = {'byte_array': byte_array, 'md5': file_hash}
    client.publish("topic", pickle.dumps(payload_json), 0)

time.sleep(1)
client.loop_stop()
client.disconnect()
An old question, but I experienced the same issue with large messages (>500kb). My solution was to increase the keepalive on the client from the default 60 to 300 sec.
This is probably related to the TLS encryption of large messages taking longer than the keepalive allows.
Edit: Added python-code for connect:
client.connect(
    host="example.com",
    port=8883,
    keepalive=300)
Update:
I found this question looking for answers to a problem that looked similar to mine, that is, MQTT publish failing for large (>500kb) payloads when using MQTT with TLS. As @hardillb indicates in his answer, the OP is missing client.loop_start(). That does not fix my problem, however.
keepalive should have no impact, but that is just not the case. Increasing the value definitely fixes the problem. My theory is that the broker fails the connection on timeout because it tries to PING the client, but the client fails to respond within the keepalive window because it is busy trying to encrypt the message. This is just a theory, though.
I've created some test code to illustrate the problem. I also included a "last will" to check if the connection is lost without a proper disconnect(), and it seems to fit my theory: using too small a keepalive definitely activates the last will on the broker, indicating a timeout.
Increasing the keepalive does not activate the "last will" on the broker.
Here is my code I used to test different keepalive values and payload sizes.
import paho.mqtt.client as mqtt_client
import time
from datetime import datetime

password = 'somepassword'
user = 'someuser'
address = 'somebroker.no'
connected = False

def on_connect(client, userdata, flags, rc):
    global connected
    connected = True
    print("Connected!")

def on_disconnect(client, userdata, rc):
    global connected
    connected = False
    print("Disconnected!")

client = mqtt_client.Client()
client.username_pw_set(user, password)
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.tls_set()
client.will_set(topic='tls_test/connected', payload='False', qos=0, retain=True)
client.connect(host=address, port=8883, keepalive=100)
client.loop_start()

# Wait for the connection to come up before publishing
while not connected:
    time.sleep(1)

topic = 'tls_test/abc'
payload = 'a' * 1000000

start = time.time()
print('Start: {}'.format(datetime.fromtimestamp(start).strftime('%H:%M:%S')))
result = client.publish(topic='tls_test/connected', payload='True', qos=0, retain=True)
result = client.publish(topic=topic, payload=payload)
if result.rc != 0:
    print("MQTT Publish failed: {}".format(result.rc))
    exit()

client.loop_stop()
client.disconnect()

stop = time.time()
print('Stop: {}, delta: {} sec'.format(datetime.fromtimestamp(stop).strftime('%H:%M:%S'), stop - start))
Using the code above (keepalive=100), it sends 1,000,000 bytes and tls_test/connected has the value True on the broker after finishing.
Data is transmitted successfully. The console output is:
python3 .\mqtt_tls.py
Connected!
Start: 10:51:16
Disconnected!
Stop: 10:53:01, delta: 105.57992386817932 sec
Reducing the keepalive (keepalive=10), transmission fails and tls_test/connected has the value False on the broker after finishing.
Data transmission fails, and the console output is:
python3 .\mqtt_tls.py
Connected!
Start: 11:08:23
Disconnected!
Disconnected!
Stop: 11:08:43, delta: 19.537118196487427 sec
Tailing /var/log/mosquitto/mosquitto.log on the broker gives the following error message:
1612346903: New client connected from x.x.x.x as xxx (c1, k10, u'someuser').
1612346930: Socket error on client xxx, disconnecting.
My conclusion is: keepalive does have an impact on large payloads when using TLS.

How to communicate with HiveServer2 running NONE authentication

I'm trying to communicate with HiveServer2 via Ruby's TCPSocket. As per the Thrift SASL spec, I send a START message and then the plain auth information.
The server returns a COMPLETE status with an empty payload. It should return a challenge as the payload, but it returns an empty string instead.
require 'socket'

START = 0x01
OK = 0x02
COMPLETE = 0x05

auth = 'PLAIN'
header = [START, auth.length].pack('cl>')
auth_string = ['anonymous'].pack('u')
auth_message = "[LOGIN] \u0000 #{auth_string} \u0000 #{auth_string}"
auth_header = [OK, auth_message.length].pack('cl>')

socket = TCPSocket.new 'localhost', 10000
socket.write header + auth
socket.write auth_header + auth_message
socket.read(5).unpack('cl>')
=> [5,0]
HiveServer2 returns status 5, which is COMPLETE.
No further communication is possible via this socket, as the server never returns anything again.
I suspect auth_message is constructed in the wrong way, or something else is wrong.
Can anyone suggest what to send so that HiveServer2 will understand my requests?
Any help will be appreciated.
UPD: Thrift SASL spec
UPD2: Solved! The SASL START block should look like the following:
require 'socket'

START = 0x01
OK = 0x02
COMPLETE = 0x05

auth = 'PLAIN'
header = [START, auth.length].pack('cl>')
auth_message = "[ANONYMOUS]\u0000anonymous\u0000anonymous"
auth_header = [OK, auth_message.length].pack('cl>')

socket = TCPSocket.new 'localhost', 10000
socket.write header + auth
socket.write auth_header + auth_message
socket.read(5).unpack('cl>')
=> [5,0]
After the COMPLETE status is received from the server, I can use TCLIService::Client to communicate with HiveServer2. Only one thing to note:
All writes to the underlying transport must be prefixed by the 4-byte length of the payload data, followed by the payload. All reads from this transport should read the 4-byte length word, then read the full quantity of bytes specified by this length word.
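For illustration, that framing amounts to the following (a Python sketch; framed_write, framed_read and recv_exactly are hypothetical helpers, not part of any library):

import struct

def framed_write(sock, payload):
    # Prefix the payload with its length as a 4-byte big-endian word.
    sock.sendall(struct.pack('>I', len(payload)) + payload)

def framed_read(sock):
    # Read the 4-byte length word, then exactly that many payload bytes.
    (length,) = struct.unpack('>I', recv_exactly(sock, 4))
    return recv_exactly(sock, length)

def recv_exactly(sock, n):
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise EOFError('socket closed mid-frame')
        buf += chunk
    return buf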
Try the thrift gem, and consider https://github.com/dallasmarlow/hiveserver2 instead of raw Ruby sockets.

OpenSSL DTLSv1_listen: server cannot get a message from client

I have a problem and I really need your help!
I found an example DTLS implementation on the Internet, called dtls_udp_echo.c.
It contains the following code in the function that implements the server's behavior:
memset(&client_addr, 0, sizeof(struct sockaddr_storage));

/* Create BIO */
bio = BIO_new_dgram(fd, BIO_NOCLOSE);

/* Set and activate timeouts */
timeout.tv_sec = 5;
timeout.tv_usec = 0;
BIO_ctrl(bio, BIO_CTRL_DGRAM_SET_RECV_TIMEOUT, 0, &timeout);

ssl = SSL_new(ctx);
cout << "ssl is " << ssl;
printf("ssl is \n");
SSL_set_bio(ssl, bio, bio);
SSL_set_options(ssl, SSL_OP_COOKIE_EXCHANGE);

/* Block until a client completes the cookie exchange */
while (DTLSv1_listen(ssl, &client_addr) <= 0) {
    /* printf("%d\n", DTLSv1_listen(ssl, &client_addr)); */
}

info = (struct pass_info *) malloc(sizeof(struct pass_info));
memcpy(&info->server_addr, &server_addr, sizeof(struct sockaddr_storage));
memcpy(&info->client_addr, &client_addr, sizeof(struct sockaddr_storage));
info->ssl = ssl;

if (pthread_create(&tid, NULL, connection_handle, info) != 0) {
    perror("pthread_create");
    exit(-1);
}
}
THREAD_cleanup();
I've created a client and it has sent a message to the server. Using tcpdump I can see the packet:
60.250026 IP (tos 0x0, ttl 64, id 59389, offset 0, flags [DF], proto UDP (17), length 104) 127.0.0.1.8001 > 127.0.0.1.8000: UDP, length 76
where:
127.0.0.1 port 8001 - client
127.0.0.1 port 8000 - server
But the server seems to be blind and does not send a handshake back to the client.
I believe the addresses are correct, because when I changed them during experiments, the client didn't manage to send a handshake to the server and there was an error:
SSL_connect: Connection refused
error:00000000:lib(0):func(0):reason(0)
My OpenSSL version is 1.0.0d.
Thank you for trying to help me!
It is hard to say exactly what your problem is, but here are a couple of ideas that might help you investigate.
Set message and info callbacks; info_cb and msg_cb are functions you have to provide:
/* Signatures: info_cb(const SSL *ssl, int where, int ret);
   msg_cb(int write_p, int version, int content_type, const void *buf, size_t len, SSL *ssl, void *arg) */
SSL_set_info_callback(ssl, info_cb);
SSL_set_msg_callback(ssl, msg_cb);
Does DTLSv1_listen ever return? In that case, what does it return?
You can also call
SSL_state_string_long(ssl)
That returns a description of the current state of ssl.
If you are on Windows, the example you refer to doesn't work, since Windows does not handle multiple UDP sockets bound to the same address and port the way the example expects. To work around that, please see http://www.net-snmp.org/wiki/index.php/DTLS_Implementation_Notes.

UDP malformed packets

I use a C# program for a client UDP application. The application listens for a connection, and then communicates.
Socket udpClient = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
udpClient.Bind(new IPEndPoint(IPAddress.Any, ListenPort));
udpClient.Blocking = true;

int count = 0;
while (count == 0)
    count = udpClient.ReceiveFrom(receiveBuffer, ref ePoint);

udpClient.SendTo(data, endPoint);
udpClient.ReceiveFrom(receiveBuffer, ref ep);
...
I use Wireshark to debug the application. The problem is that after some time my application starts sending malformed STUN packets, and I think that because of this they get rejected by a router on the Internet.
The question: is it possible to prevent sending malformed UDP/STUN packets?
When your application sends malformed UDP packets, it has a bug. The minimal fragment of your code has only one SendTo call. You can add a function that checks the content/length of data before sending.
BTW: UDP is connectionless. I would say your application waits for a request or a kind of start command, not for a connection.
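For illustration, such a check could validate the STUN header rules from RFC 5389 before each SendTo. A rough sketch (in Python for brevity; looks_like_valid_stun is a hypothetical helper, and note that classic RFC 3489 STUN has no magic cookie):

import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed header value defined by RFC 5389

def looks_like_valid_stun(packet):
    """Rough sanity check of a STUN message header (RFC 5389)."""
    if len(packet) < 20:                       # header is always 20 bytes
        return False
    msg_type, msg_len, cookie = struct.unpack('>HHI', packet[:8])
    if msg_type & 0xC000:                      # top two bits must be zero
        return False
    if cookie != STUN_MAGIC_COOKIE:
        return False
    if msg_len % 4 != 0:                       # attributes are 32-bit aligned
        return False
    return len(packet) == 20 + msg_len         # length field must match body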