How can I tell if a website is CDN-enabled? If it is using the Cloudflare CDN, is it possible to disable the cache for a specific URL?
Best regards,
Kelvin.
You can use ping to check whether a domain is behind a CDN, for example:
$ ping chat.openai.com
Pinging chat.openai.com.cdn.cloudflare.net [104.18.3.161] with 32 bytes of data:
Reply from 104.18.3.161: bytes=32 time=133ms TTL=54
Reply from 104.18.3.161: bytes=32 time=129ms TTL=54
Reply from 104.18.3.161: bytes=32 time=131ms TTL=54
Reply from 104.18.3.161: bytes=32 time=129ms TTL=54
Ping statistics for 104.18.3.161:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 129ms, Maximum = 133ms, Average = 130ms
Here ping resolved chat.openai.com to chat.openai.com.cdn.cloudflare.net, which means chat.openai.com is a CNAME pointing at Cloudflare's CDN. This method works in most cases.
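If you want to script this check, a plain DNS lookup works too. Here is a minimal Python sketch; the .cdn.cloudflare.net suffix test is just an example heuristic for Cloudflare, not a general CDN detector:

import socket

# Resolve the name and inspect the canonical host name.
# For a CDN-fronted site the canonical name is usually the CDN's own domain.
canonical, aliases, addresses = socket.gethostbyname_ex("chat.openai.com")
print("canonical name:", canonical)
print("aliases:", aliases)
print("addresses:", addresses)

if canonical.endswith(".cdn.cloudflare.net"):
    print("Looks like the site is behind the Cloudflare CDN")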
I am trying to block tcp packets of a specific user/session after some threshold is reached.
Currently I am able to write a script that drops tcp packets.
@set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
def switch_features_handler(self, ev):
    tcp_match = self.drop_tcp_packets_to_specfic_ip(parser)
    self.add_flow_for_clear(datapath, 2, tcp_match)

def drop_tcp_packets_to_specfic_ip(self, parser):
    tcp_match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, ipv4_src=conpot_ip)
    return tcp_match
Thanks.
You need to install a rule that matches the packet flow.
Then you need to create a loop that periodically requests statistics for this rule.
Finally, you read each statistics reply and check the packet count; once it reaches your threshold, you send a rule that blocks the packets.
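A rough sketch of that approach with Ryu (not a drop-in solution: the 10-second poll interval and the 1000-packet threshold are placeholder values, and the drop rule simply reuses the match of whichever flow crossed the threshold):

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, DEAD_DISPATCHER, set_ev_cls
from ryu.lib import hub


class ThresholdBlocker(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(ThresholdBlocker, self).__init__(*args, **kwargs)
        self.datapaths = {}
        self.monitor_thread = hub.spawn(self._monitor)

    @set_ev_cls(ofp_event.EventOFPStateChange, [MAIN_DISPATCHER, DEAD_DISPATCHER])
    def _state_change_handler(self, ev):
        # Keep track of connected switches so the monitor loop can poll them.
        if ev.state == MAIN_DISPATCHER:
            self.datapaths[ev.datapath.id] = ev.datapath
        elif ev.state == DEAD_DISPATCHER:
            self.datapaths.pop(ev.datapath.id, None)

    def _monitor(self):
        # Loop that periodically requests flow statistics from every switch.
        while True:
            for dp in self.datapaths.values():
                dp.send_msg(dp.ofproto_parser.OFPFlowStatsRequest(dp))
            hub.sleep(10)

    @set_ev_cls(ofp_event.EventOFPFlowStatsReply, MAIN_DISPATCHER)
    def _flow_stats_reply_handler(self, ev):
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser
        for stat in ev.msg.body:
            # Once a flow has seen more packets than the threshold,
            # install a higher-priority rule with no actions, i.e. drop.
            if stat.packet_count > 1000:
                drop = parser.OFPFlowMod(datapath=datapath,
                                         priority=stat.priority + 1,
                                         match=stat.match,
                                         instructions=[])
                datapath.send_msg(drop)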
I've implemented an unsecured Mosquitto broker, which works fantastically for sending a large amount of data periodically (a ~200 kB file once per minute) over port 1883.
Since I've enabled TLS, the broker seems to reject payloads larger than about 128 kB over port 8883, despite setting message_size_limit = 0.
Here's my mosquitto.conf:
listener 1883 localhost
listener 8883
certfile /etc/letsencrypt/live/example.com/cert.pem
cafile /etc/letsencrypt/live/example.com/chain.pem
keyfile /etc/letsencrypt/live/example.com/privkey.pem
And here's the script I use to test the broker, which works fine without TLS over port 1883:
client = mqtt.Client("test")
client.tls_set(certfile="./mqtt/cert.pem", keyfile="./mqtt/key.pem")
client.connect("example.com", 8883)

# publish file as zip
with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()
    m.update(byte_array)
    file_hash = m.hexdigest()
    payload_json = {'byte_array': byte_array, 'md5': file_hash}
    client.publish("topic", pickle.dumps(payload_json), 0)

time.sleep(1)
client.disconnect()
Is there a limit on the payload size with TLS or is something wrong with my setting/script?
The problem here is that the MQTT Client loop is not being run.
When the payload is larger than will fit in a single TCP packet, the call to client.publish() has to queue up the rest of the message, which is then broken up into multiple packets and sent via the client network loop.
The correct fix is not to increase the keepalive period. There are two ways to solve this with the Python Paho library.
First, you can use the paho.mqtt.publish helper module instead of the Client class. It provides a single function call that handles all the background tasks required to ensure the whole message is delivered.
import paho.mqtt.publish as publish

tls_opt = {
    'certfile': "./mqtt/cert.pem",
    'keyfile': "./mqtt/key.pem"
}

with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()
    m.update(byte_array)
    file_hash = m.hexdigest()
    payload_json = {'byte_array': byte_array, 'md5': file_hash}
    publish.single("topic", payload=pickle.dumps(payload_json), qos=0, hostname="example.com", port=8883, tls=tls_opt)
The second is to start the network loop yourself, so the queued packets actually get sent:

client = mqtt.Client("test")
client.tls_set(certfile="./mqtt/cert.pem", keyfile="./mqtt/key.pem")
client.connect("example.com", 8883)
client.loop_start()   # run the network loop in a background thread

# publish file as zip
with open("./mqtt/20180319171000.gz", 'rb') as f:
    byte_array = f.read()
    m.update(byte_array)
    file_hash = m.hexdigest()
    payload_json = {'byte_array': byte_array, 'md5': file_hash}
    client.publish("topic", pickle.dumps(payload_json), 0)

time.sleep(1)
client.loop_stop()
client.disconnect()
An old question, but I experienced the same issue with large messages (>500 kB). My solution was to increase the keepalive on the client from the default 60 to 300 sec.
This is probably because TLS encryption of large messages takes longer than the keepalive interval, causing a timeout.
Edit: Added Python code for the connect call:
client.connect(
host="example.com",
port=8883,
keepalive=300)
Update:
I found this question looking for answers to a problem that looked similar to mine, that is, MQTT publish failing for large (>500 kB) payloads when using MQTT with TLS. As @hardillb indicates in his answer, OP is missing client.loop_start(). That does not fix my problem, however.
keepalive should have no impact, but that is just not the case: increasing the value definitely fixes the problem. My theory is that the broker fails the connection on timeout because it tries to PING the client, but the client does not respond within the keepalive interval because it is busy encrypting the message. This is just a theory, though.
I've created some test code to illustrate the problem. I also included a "last will" to check whether the connection is lost without a proper disconnect(), and it seems to fit my theory. Using too small a keepalive definitely activates the last will on the broker, indicating a "timeout".
Increasing the keepalive does not activate "last will" on the broker.
Here is my code I used to test different keepalive values and payload sizes.
import paho.mqtt.client as mqtt_client
import time
from datetime import datetime

password = 'somepassword'
user = 'someuser'
address = 'somebroker.no'
connected = False

def on_connect(client, userdata, flags, rc):
    global connected
    connected = True
    print("Connected!")

def on_disconnect(client, userdata, rc):
    global connected
    connected = False
    print("Disconnected!")

client = mqtt_client.Client()
client.username_pw_set(user, password)
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.tls_set()
client.will_set(topic='tls_test/connected', payload='False', qos=0, retain=True)
client.connect(host=address, port=8883, keepalive=100)
client.loop_start()

while not connected:
    time.sleep(1)

topic = 'tls_test/abc'
payload = 'a'*1000000

start = time.time()
print('Start: {}'.format(datetime.fromtimestamp(start).strftime('%H:%M:%S')))
result = client.publish(topic='tls_test/connected', payload='True', qos=0, retain=True)
result = client.publish(topic=topic, payload=payload)
if result.rc != 0:
    print("MQTT Publish failed: {}".format(result.rc))
    exit()
client.loop_stop()
client.disconnect()
stop = time.time()
print('Stop: {}, delta: {} sec'.format(datetime.fromtimestamp(stop).strftime('%H:%M:%S'), stop-start))
Using the code above (keepalive=100), it sends 1,000,000 bytes and tls_test/connected has the value True on the broker after finishing.
Data is transmitted successfully, and the console output is:
python3 .\mqtt_tls.py
Connected!
Start: 10:51:16
Disconnected!
Stop: 10:53:01, delta: 105.57992386817932 sec
Reducing the keepalive (keepalive=10), transmission fails and tls_test/connected has the value False on the broker after finishing.
The data transmission fails, and the console output is:
python3 .\mqtt_tls.py
Connected!
Start: 11:08:23
Disconnected!
Disconnected!
Stop: 11:08:43, delta: 19.537118196487427 sec
Tailing /var/log/mosquitto/mosquitto.log on the broker gives the following error message:
1612346903: New client connected from x.x.x.x as xxx (c1, k10, u'someuser').
1612346930: Socket error on client xxx, disconnecting.
My conclusion is: keepalive does have an impact on large payloads when using TLS.
I've found that when I send a UDP datagram that gets fragmented (over 1452 bytes with MTU=1500), tcpdump shows all the fragments arriving on the target machine, but then no message is received on the socket. This happens only with IPv6 addresses (both global and link-local); with IPv4 everything works as expected (and with non-fragmented datagrams as well).
As the datagram is discarded, there is this ICMP6 message:
05:10:59.887920 IP6 (hlim 64, next-header ICMPv6 (58) payload length: 69) 2620:52:0:105f::ffff:74 > 2620:52:0:105f::ffff:7b: [icmp6 sum ok] ICMP6, destination unreachable, length 69, unreachable port[|icmp6]
There are some repeated neighbour solicitations/advertisements going on, and I can see that the entry gets into the neighbour cache (via ip neigh).
One minute later I get another ICMP6 message saying that the fragment reassembly has timed out.
What's wrong with the settings? The reassembled packet should not be discarded when it can be delivered, right?
System is RHEL6 2.6.32-358.11.1.el6.x86_64
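For reference, a minimal pair of snippets that reproduces the setup described above (the port 9999 and the 2000-byte payload are arbitrary choices; the address is the one from the tcpdump output):

import socket

# Sender: a single UDP datagram larger than the path MTU, so it gets fragmented.
payload = b"x" * 2000
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.sendto(payload, ("2620:52:0:105f::ffff:7b", 9999))

# Receiver (run on the target machine):
# sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
# sock.bind(("::", 9999))
# data, addr = sock.recvfrom(65535)
# print(len(data), "bytes from", addr[0])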
I'm trying to figure out what I (the client) need to send in the NTP request packet to retrieve an NTP packet from the server. I'm working with LWIP on a Cortex-M3, Stellaris LM3S6965.
I understand that I will receive a UDP header and then the NTP data with the different timestamps used to remove the latency. I probably need to build a UDP header, but what do I need to add as data?
Wireshark capture:
I hope you guys can help me.
The client request packet is the same as the server reply packet - just set the MODE bits in the first word to 3 (Client) to be sure.
Send the whole 48 byte packet to the server, it will reply with the same.
The simplest packet would be 0x1B followed by 47 zeroes. (Version = 3, mode = 3)
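To make that concrete, here is a minimal Python sketch of the exchange (pool.ntp.org is just an example server, and error handling is omitted); the same 48 bytes apply whatever stack you build the packet with:

import socket
import struct

NTP_SERVER = "pool.ntp.org"   # example server
NTP_PORT = 123
NTP_TO_UNIX = 2208988800      # seconds between 1900-01-01 and 1970-01-01

# 48-byte request: first byte 0x1B = LI 0, Version 3, Mode 3 (client), rest zeroes
request = b"\x1b" + 47 * b"\0"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(request, (NTP_SERVER, NTP_PORT))
reply, _ = sock.recvfrom(48)

# The Transmit Timestamp starts at byte 40; its first 32 bits are whole seconds since 1900.
ntp_seconds = struct.unpack("!I", reply[40:44])[0]
print("Unix time from server:", ntp_seconds - NTP_TO_UNIX)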
This is for starters: http://www.eecis.udel.edu/~mills/ntp/html/warp.html
Check this out in case you haven't yet: https://www.rfc-editor.org/rfc/rfc5905
Then look at this: http://wiki.wireshark.org/NTP and check out the sample pcap files that they have uploaded.
I am not sure if this helped, but I hope so.
I have coded an Arduino to connect to an NTP server using this code here,
http://www.instructables.com/id/Arduino-Internet-Time-Client/step2/Code/
Look at the methods called getTimeAndDate and sendNTPpacket.
Below is the packet that is sent. It sets up a 48-byte buffer, with the values given in binary (0b) and hex (0x). The address is the NTP time server:
memset(packetBuffer, 0, NTP_PACKET_SIZE);   // clear the 48-byte buffer
packetBuffer[0] = 0b11100011;               // LI = 3 (unsynchronized), Version = 4, Mode = 3 (client)
packetBuffer[1] = 0;                        // Stratum
packetBuffer[2] = 6;                        // Polling interval
packetBuffer[3] = 0xEC;                     // Peer clock precision
packetBuffer[12] = 49;                      // bytes 12-15: reference ID
packetBuffer[13] = 0x4E;
packetBuffer[14] = 49;
packetBuffer[15] = 52;
Udp.beginPacket(address, 123);              // NTP uses UDP port 123
Udp.write(packetBuffer, NTP_PACKET_SIZE);
Udp.endPacket();
Here is what happens to the received packet:
Udp.read(packetBuffer, NTP_PACKET_SIZE);             // read the packet into the buffer
unsigned long highWord, lowWord, epoch;
highWord = word(packetBuffer[40], packetBuffer[41]); // bytes 40-43 hold the Transmit Timestamp
lowWord = word(packetBuffer[42], packetBuffer[43]);  // (the whole-seconds part)
epoch = highWord << 16 | lowWord;                    // seconds since Jan 1, 1900
epoch = epoch - 2208988800 + timeZoneOffset;         // convert to Unix time (since Jan 1, 1970)
flag = 1;
setTime(epoch);
setTime is part of the Arduino Time library. The NTP timestamp itself is the number of seconds since Jan 1, 1900, as suggested here (search for epoch), which is why 2208988800 is subtracted above to convert it to the Unix epoch (Jan 1, 1970):
https://en.wikipedia.org/wiki/Network_Time_Protocol
But in case you want a C# version, I found this question, compiled the code in the accepted answer, and it works. It will likely make more sense to you, and it also shows the use of the 1/1/1900 epoch.
How to Query an NTP Server using C#?
You can easily see the similarity.
I have a Rails application running on a server where I added some iptables rules to improve security. Now OmniAuth callbacks have stopped working. Every time I try to log in with any provider, I get this error in my application log:
Errno::ENETUNREACH (Network is unreachable - connect(2))
And this dropped packet gets logged in syslog:
IN=eth0 OUT= MAC=40:40:ea:31:ac:8d:64:00:f1:cd:1f:7f:08:00 SRC=66.220.147.99 DST=my_ip LEN=56 TOS=0x00 PREC=0x00 TTL=88 ID=0 DF PROTO=TCP SPT=443 DPT=37035 WINDOW=14480 RES=0x00 ACK SYN URGP=0
Can someone tell me what that entry in my syslog is about and what kind of iptables rule is needed to allow it?
If needed, I can also add the rules I have applied so far.
EDIT:
The syslog line was incorrect, so I replaced it.
The answer to my original question was found at http://lists.debian.org/debian-user/2002/07/msg01187.html
IN = interface the packet came in
OUT = interface used for sending the packet
MAC = MAC address for source and destination
SRC = IP of the sender
DST = IP of the receiver
LEN = Length of the packet
TOS = Type of Service
PREC = Precedence
TTL = Time to live (hop count of the packet)
ID = Packet ID number
DF = Don't fragment bit
PROTO = The protocol
SPT = Sender port
DPT = Receiving port
WINDOW = TCP window size
RES = Reserved bits
And then some TCP flags at the end of the row:
ACK = acknowledgement flag set
SYN = synchronize flag set (SYN+ACK together means this is the second step of a TCP handshake)
URGP = urgent pointer
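For the original problem: the logged packet is an incoming SYN+ACK from port 443, i.e. the reply to an outgoing HTTPS connection the Rails app opened towards the OAuth provider, so the firewall is dropping the return traffic. Assuming the INPUT chain ends in a DROP, a rule accepting packets that belong to established connections, placed before the drop, should let these replies back in, for example:

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT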