I'm trying to communicate with HiveServer2 via a Ruby TCPSocket. As per the Thrift SASL spec, I send a START message and then the PLAIN auth information.
The server returns a COMPLETE status with an empty payload; it should return a challenge as the payload, but instead it sends an empty string.
START = 0x01
OK = 0x02
COMPLETE = 0x05
auth = 'PLAIN'
header = [START, auth.length].pack('cl>')
auth_string = ['anonymous'].pack('u')
auth_message = "[LOGIN] \u0000 #{auth_string} \u0000 #{auth_string}"
auth_header = [OK, auth_message.length].pack('cl>')
socket = TCPSocket.new 'localhost', 10000
socket.write header + auth
socket.write auth_header + auth_message
socket.read(5).unpack('cl>')
=> [5,0]
HiveServer2 returns status 5, which is COMPLETE.
No further communication is possible over this socket, as the server no longer returns anything.
I suspect auth_message is constructed the wrong way, or something else is wrong.
Can anyone suggest how to form the requests so that HiveServer2 will understand them?
Any help will be appreciated.
UPD: Thrift SASL spec
UPD2: Solved! The SASL start block should look like the following:
require 'socket'

START = 0x01
OK = 0x02
COMPLETE = 0x05
auth = 'PLAIN'
header = [START, auth.length].pack('cl>')
auth_message = "[ANONYMOUS]\u0000anonymous\u0000anonymous"
auth_header = [OK, auth_message.length].pack('cl>')
socket = TCPSocket.new 'localhost', 10000
socket.write header + auth
socket.write auth_header + auth_message
socket.read(5).unpack('cl>')
=> [5,0]
After the COMPLETE status is received from the server, I can use TCLIService::Client to communicate with HiveServer2. Only one thing to note:
All writes to the underlying transport must be prefixed by the 4-byte length of the payload data, followed by the payload. All reads from this transport
should read the 4-byte length word, then read the full quantity of bytes
specified by this length word.
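The 4-byte framing is easy to get wrong by hand. Here is a minimal Ruby sketch of it, assuming the same socket and the signed big-endian length directive ('l>') used in the snippets above (my own illustration, not tested against HiveServer2):

def framed_write(socket, payload)
  # 4-byte length word first, then the payload itself
  socket.write([payload.bytesize].pack('l>') + payload)
end

def framed_read(socket)
  # read the length word, then exactly that many bytes
  length = socket.read(4).unpack('l>').first
  socket.read(length)
end

A transport whose writes and reads go through helpers like these should be enough for TCLIService::Client to sit on once the SASL handshake has returned COMPLETE.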
Try using the thrift gem, and consider https://github.com/dallasmarlow/hiveserver2 instead of raw Ruby sockets.
I’m using SSL for reading data from various remote services over secure websockets as follows: I create the socket, embed it in the SSL context and add the socket to the reading list for Unix.select. When the socket fires, I use Ssl.read to get the data.
Four services work well, but with one of them I get Ssl.Read_error.Error_syscall: error:00000000:lib(0):func(0):reason(0) after receiving each websocket frame (~5-6 KB). The frames here are much bigger than on the other services, but I'm not sure that is the reason.
I ignore the syscall errors (and most probably lose some data) because frames continue to arrive. Then, always after one minute, I get Ssl.Read_error.Error_zero_return: error:00000000:lib(0):func(0):reason(0), which means the peer closed the SSL socket for writing, and I have to restart the process because no new data will be received from this socket.
The problem is perfectly reproducible. At the same time, the examples for this service and my own test implementation in Node.js receive data for hours without any problems.
I assume I'm doing something wrong, or that my socket/SSL setup is too naive (see below).
Any help or ideas would be greatly appreciated.
let sock = Unix.socket PF_INET SOCK_STREAM 0 in
let laddr = Unix.inet_addr_of_string p.interface in
Unix.bind sock (ADDR_INET (laddr,0));
Unix.connect sock addr;
let (sock, res) =
  let req = Bytes.of_string http_request in
  if ssl then begin
    Ssl.init ();
    let ctx = create_context TLSv1_2 Client_context in
    let sock = Ssl.embed_socket sock ctx in
    Ssl.connect sock;
    (SslSock sock, (write sock req 0 http_request_len))
  end else
    (UnixSock sock, (Unix.write sock req 0 http_request_len))
Wireshark did the trick: this "bad" service sends two websocket frames in one TCP packet, where the second frame has a zero payload length. Naturally, my websocket implementation handled zero-payload frames improperly, which led to missed Ping frames and the remote server closing the TCP connection.
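For reference, the frame header (RFC 6455) is the part that is easy to mishandle. A minimal parsing sketch, written in Ruby for brevity (the logic is identical in OCaml) and assuming unmasked server-to-client frames with no extensions:

# A zero-length payload is still a complete, meaningful frame.
def read_frame(io)
  b0, b1 = io.read(2).unpack('C2')   # byte 0: FIN + opcode, byte 1: MASK + 7-bit length
  opcode = b0 & 0x0f
  len    = b1 & 0x7f
  len = io.read(2).unpack('n').first  if len == 126   # 16-bit extended length
  len = io.read(8).unpack('Q>').first if len == 127   # 64-bit extended length
  payload = len.zero? ? '' : io.read(len)
  [opcode, payload]
end

Opcode 0x9 is a Ping and must be answered with a Pong (opcode 0xA) carrying the same payload; treating a zero-length frame as "not a frame yet" is exactly the kind of bug that silently drops those Pings until the server gives up and closes the connection.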
At the interface level DCCP is like TCP: you connect and then send/receive bytes.
I was wondering whether it's possible to make DCCP connections in Twisted by just adapting the wrappers for TCP.
According to the sample code (below), what needs to change is:
at socket instantiation: use different parameters
before using the socket: set some options
Then everything else would be the same...
Hints: I've spotted addressFamily and socketType in the Twisted sources, but I have no idea how to set them cleanly in the protocol factory. The protocol number (the third argument to socket(), here IPPROTO_DCCP) is also always left at its default, and I have no clue how to access the underlying socket to call setsockopt.
import socket
socket.DCCP_SOCKOPT_PACKET_SIZE = 1
socket.DCCP_SOCKOPT_SERVICE = 2
socket.SOCK_DCCP = 6
socket.IPPROTO_DCCP = 33
socket.SOL_DCCP = 269
packet_size = 256
address = (socket.gethostname(),12345)
# Create sockets
server, client = [socket.socket(socket.AF_INET, socket.SOCK_DCCP,
                                socket.IPPROTO_DCCP) for i in range(2)]
for s in (server, client):
    s.setsockopt(socket.SOL_DCCP, socket.DCCP_SOCKOPT_PACKET_SIZE, packet_size)
    s.setsockopt(socket.SOL_DCCP, socket.DCCP_SOCKOPT_SERVICE, True)
# Connect sockets
server.bind(address)
server.listen(1)
client.connect(address)
s, a = server.accept()
# Echo
while True:
    client.send(raw_input("IN: "))
    print "OUT:", s.recv(1024)
More about DCCP:
https://www.sjero.net/research/dccp/
https://wiki.linuxfoundation.org/networking/dccp
TL;DR: DCCP is a protocol that provides congestion control (like TCP) without guaranteeing reliability or in-order delivery of data (like UDP). The standard Linux kernel implements DCCP.
I've set up an unsecured Mosquitto broker, which works fantastically for periodically sending large amounts of data (a ~200 kB file once per minute) over port 1883.
Since I've enabled TLS, the broker seems to automatically reject data larger than 128 kB over port 8883, despite setting message_size_limit = 0.
Here's my mosquitto.conf:
listener 1883 localhost
listener 8883
certfile /etc/letsencrypt/live/example.com/cert.pem
cafile /etc/letsencrypt/live/example.com/chain.pem
keyfile /etc/letsencrypt/live/example.com/privkey.pem
And here's the script I use to test the broker; it works fine without TLS over port 1883:
import hashlib
import pickle
import time
import paho.mqtt.client as mqtt

m = hashlib.md5()

client = mqtt.Client("test")
client.tls_set(certfile="./mqtt/cert.pem", keyfile="./mqtt/key.pem")
client.connect("example.com", 8883)

# publish file as zip
with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()

m.update(byte_array)
file_hash = m.hexdigest()
payload_json = {'byte_array': byte_array, 'md5': file_hash}
client.publish("topic", pickle.dumps(payload_json), 0)
time.sleep(1)
client.disconnect()
Is there a limit on the payload size with TLS, or is something wrong with my settings/script?
The problem here is that the MQTT client network loop is not being run.
When the payload is larger than will fit in a single TCP packet, the call to client.publish() has to queue up the rest of the message, which is then broken into multiple packets and sent by the client loop.
The correct response is not to increase the keepalive period. There are two ways to solve this with the Python Paho library.
First, you can use the paho.mqtt.publish helper instead of the Client class. It includes a single function that handles all the background tasks required to ensure the whole message is delivered.
import paho.mqtt.publish as publish
tls_opt = {
    'certfile': "./mqtt/cert.pem",
    'keyfile': "./mqtt/key.pem"
}

with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()
m.update(byte_array)
file_hash = m.hexdigest()
payload_json = {'byte_array': byte_array, 'md5': file_hash}
publish.single("topic", payload=pickle.dumps(payload_json), qos=0, hostname="example.com", port=8883, tls=tls_opt)
The second is to start the network loop yourself, as follows:
client = mqtt.Client("test")
client.tls_set(certfile="./mqtt/cert.pem", keyfile="./mqtt/key.pem")
client.connect("example.com", 8883)
client.loop_start()
#publish file as zip
with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()
m.update(byte_array)
file_hash = m.hexdigest()
payload_json = {'byte_array': byte_array, 'md5': file_hash}
client.publish("topic", pickle.dumps(payload_json), 0)
time.sleep(1)
client.loop_stop()
client.disconnect()
An old question, but I experienced the same issue with large messages (>500 kB). My solution was to increase the keepalive on the client from the default of 60 to 300 seconds.
This is probably because TLS encryption of large messages takes longer than the keepalive interval, which triggers a timeout.
Edit: added the Python code for the connect call:
client.connect(
host="example.com",
port=8883,
keepalive=300)
Update:
I found this question looking for answers to a problem that looked similar to mine, namely that MQTT publish failed for large (>500 kB) payloads when using MQTT with TLS. As @hardillb indicates in his answer, the OP is missing client.loop_start(). That does not fix my problem, however.
keepalive should have no impact, but that is just not the case: increasing the value definitely fixes the problem. My theory is that the broker fails the connection on a timeout because it tries to PING the client, but the client does not respond within the keepalive interval since it is busy encrypting the message. This is just a theory, though.
I've created some test code to illustrate the problem. I also included a "last will" to check whether the connection is lost without a proper disconnect(), and it seems to fit my theory: using too small a keepalive definitely activates the last will on the broker, indicating a timeout.
Increasing the keepalive does not activate the last will on the broker.
Here is the code I used to test different keepalive values and payload sizes:
import paho.mqtt.client as mqtt_client
import time
from datetime import datetime
password = 'somepassword'
user = 'someuser'
address = 'somebroker.no'
connected = False
def on_connect(client, userdata, flags, rc):
    global connected
    connected = True
    print("Connected!")

def on_disconnect(client, userdata, rc):
    global connected
    connected = False
    print("Disconnected!")
client = mqtt_client.Client()
client.username_pw_set(user, password)
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.tls_set()
client.will_set(topic='tls_test/connected', payload='False', qos=0, retain=True)
client.connect(host=address, port=8883, keepalive=100)
client.loop_start()
while not connected:
    time.sleep(1)
topic = 'tls_test/abc'
payload = 'a'*1000000
start = time.time()
print('Start: {}'.format(datetime.fromtimestamp(start).strftime('%H:%M:%S')))
result = client.publish(topic='tls_test/connected', payload='True', qos=0, retain=True)
result = client.publish(topic=topic, payload=payload)
if result.rc != 0:
    print("MQTT Publish failed: {}".format(result.rc))
    exit()
client.loop_stop()
client.disconnect()
stop = time.time()
print('Stop: {}, delta: {} sec'.format(datetime.fromtimestamp(stop).strftime('%H:%M:%S'), stop-start))
Using the code above (keepalive=100), it sends 1,000,000 bytes, and tls_test/connected has the value True on the broker after finishing.
The data is transmitted successfully, and the console output is:
python3 .\mqtt_tls.py
Connected!
Start: 10:51:16
Disconnected!
Stop: 10:53:01, delta: 105.57992386817932 sec
Reducing the keepalive (keepalive=10), transmission fails and tls_test/connected has the value False on the broker after finishing.
The data transmission fails, and the console output is:
python3 .\mqtt_tls.py
Connected!
Start: 11:08:23
Disconnected!
Disconnected!
Stop: 11:08:43, delta: 19.537118196487427 sec
Tailing /var/log/mosquitto/mosquitto.log on the broker gives the following error message:
1612346903: New client connected from x.x.x.x as xxx (c1, k10, u'someuser').
1612346930: Socket error on client xxx, disconnecting.
My conclusion is: keepalive does have an impact on large payloads when using TLS.
I have a custom Protobuf-based protocol that I've implemented as an EventMachine protocol and I'd like to use it over a secure connection between the server and clients. Each time I send a message from a client to the server, I prepend the message with a 4-byte integer representing the size of the Protobuf serialized string to be sent such that the server knows how many bytes to read off the wire before parsing the data back into a Protobuf message.
I'm calling start_tls in the post_init callback method in both the client and server protocol handlers, with the one in the server handler being passed the server's private key and certificate. There seems to be no errors happening at this stage, based on log messages I'm printing out.
Where I get into trouble is when I begin parsing data in the receive_data callback in the server's handler code: I read 4 bytes of data off the wire and unpack them into an integer, but the integer that gets unpacked is not the same integer I sent from the client (i.e. I'm sending 17, but receiving 134222349).
Note that this does not happen when I don't use TLS... everything works fine if I remove the start_tls calls in both the client and server code.
Is it the case that SSL/TLS data gets passed to the receive_data callback when TLS is used? If so, how do I know when data from the client begins? I can't seem to find any example code that discusses this use case...
OK, so via a cross-post to the EventMachine Google Group I figured out what my problem was here. Essentially, I was trying to send data from the client to the server before the TLS handshake was done because I wasn't waiting until the ssl_handshake_completed callback was called.
Here's the code I got to work, just in case anyone comes across this post in the future. :)
Handler code for the server-side:
require 'eventmachine'
class ServerHandler < EM::Connection
  def post_init
    start_tls :private_key_file => 'server.key', :cert_chain_file => 'server.crt', :verify_peer => false
  end

  def receive_data(data)
    puts "Received data in server: #{data}"
    send_data(data)
  end
end
Handler code for the client-side:
require 'eventmachine'
class ClientHandler < EM::Connection
  def connection_completed
    start_tls
  end

  def receive_data(data)
    puts "Received data in client: #{data}"
  end

  def ssl_handshake_completed
    send_data('Hello World! - 12345')
  end
end
Code to start server:
EventMachine.run do
  puts 'Starting server...'
  EventMachine.start_server('127.0.0.1', 45123, ServerHandler)
end
Code to start client:
EventMachine.run do
  puts 'Starting client...'
  EventMachine.connect('127.0.0.1', 45123, ClientHandler)
end
I have a Rails application running on a server where I added some iptables rules to improve security. Now OmniAuth callbacks have stopped working. Every time I try to log in with any provider, I get this error in my application log:
Errno::ENETUNREACH (Network is unreachable - connect(2))
And this dropped packet gets logged to syslog:
IN=eth0 OUT= MAC=40:40:ea:31:ac:8d:64:00:f1:cd:1f:7f:08:00 SRC=66.220.147.99 DST=my_ip LEN=56 TOS=0x00 PREC=0x00 TTL=88 ID=0 DF PROTO=TCP SPT=443 DPT=37035 WINDOW=14480 RES=0x00 ACK SYN URGP=0
Can someone tell me what that entry in my syslog is about and what kind of iptables rule is needed to allow it?
If needed, I can also add the rules I have applied so far.
EDIT:
The syslog line was incorrect, so I replaced it.
The answer to my original question was found at http://lists.debian.org/debian-user/2002/07/msg01187.html:
IN = interface the packet came in
OUT = interface used for sending the packet
MAC = MAC address for source and destination
SRC = IP of the sender
DST = IP of the receiver
LEN = Length of the packet
TOS = Type of Service field
PREC = Precedence
TTL = Time to live (hop count of the packet)
ID = Packet ID number
DF = Don't fragment bit
PROTO = The protocol
SPT = Sender port
DPT = Receiving port
WINDOW = TCP window size advertised by the sender
RES = Reserved bits
And then the TCP flags at the end of the row:
ACK = the ACK (acknowledgement) flag is set
SYN = the SYN flag is set; together with ACK this marks the reply to an outgoing connection attempt
URGP = Urgent pointer value