The ICE protocol was updated in RFC 8445. ICE Lite predates that RFC. The details on ICE Lite in RFC 8445 are provided in Appendix A, and they are very sketchy. However, back in 2007, an attempt was made to formalize what ICE Lite was, in this draft RFC. It is fairly descriptive, but some of its statements conflict with those in RFC 8445. For example, RFC 8445 does allow for both peers to be ICE Lite, while the draft document suggests otherwise.
Can someone point out the exceptions or corrections to the draft RFC on ICE Lite that would make it compatible with RFC 8445? Or point to a document that describes ICE Lite in more detail than the description in RFC 8445?
I am NOT using libnice, but as there is no relevant tag for ICE, I used the libnice tag hoping that users of libnice will have some info.
pion/ice has an option for ICE Lite. Much of this was trial and error, but here is what I learned along the way.
From RFC 8445, Section 6.1.1, "Determining Role":
Both lite: The initiating agent that started the ICE processing MUST
take the controlling role, and the other MUST take the controlled
role. In this case, no connectivity checks are ever sent.
Rather, once the candidates are exchanged, each agent performs the
processing described in Section 8 without connectivity checks. It
is possible that both agents will believe they are controlled or
controlling. In the latter case, the conflict is resolved through
glare detection capabilities in the signaling protocol enabling
the candidate exchange. The state of ICE processing for each data
stream is considered to be Running, and the state of ICE overall
is Running.
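To make the role rules concrete, here is a minimal sketch of the decision table in RFC 8445 Section 6.1.1, including the both-lite case quoted above. The function and parameter names are illustrative assumptions, not from pion/ice or any other library:

```python
# Sketch of the RFC 8445 Section 6.1.1 role-determination rules.
# Names are illustrative, not taken from any real library.

def determine_role(local_is_lite, remote_is_lite, local_is_initiator):
    """Return 'controlling' or 'controlled' for the local agent."""
    if local_is_lite and remote_is_lite:
        # Both lite: the initiating agent MUST take the controlling role,
        # and no connectivity checks are ever sent.
        return "controlling" if local_is_initiator else "controlled"
    if local_is_lite:
        # Lite vs. full: the full agent takes the controlling role.
        return "controlled"
    if remote_is_lite:
        return "controlling"
    # Both full: the initiator starts out controlling; later conflicts are
    # resolved via the ICE-CONTROLLING/ICE-CONTROLLED tiebreaker attributes.
    return "controlling" if local_is_initiator else "controlled"
```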
I haven't found a single comprehensive place to learn about ICE Lite. But you can look at how pion/ice behaves, and I'm happy to answer more individual questions!
Currently I am studying how exactly Bluetooth works, and I came across the terms active and passive fingerprinting techniques. Could anybody explain these terms to me or give me some pointers to literature?
I don't know enough about Bluetooth to give a specific answer about fingerprinting it; however, your question seems general, so I'll try giving a general answer.
In general, passive techniques are techniques that don't require active participation in the network, so they can be done without sending packets or frames, just by listening. This means that passive techniques are very hard to detect, but they are more limited.
In the case of Bluetooth, passive fingerprinting can probably be done by listening to beacon frames, or perhaps to a conversation between two or more devices.
Active fingerprinting, on the other hand, requires you to send frames into the network, to the device(s) being fingerprinted, and to listen to the response(s).
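To make the distinction concrete, here is a tiny illustrative sketch. capture_frames() and send_probe() are hypothetical stubs, not a real Bluetooth API; a real implementation would sit on top of an HCI or monitor-mode interface:

```python
# Illustrative only: contrasts passive vs. active fingerprinting.
# capture_frames() and send_probe() are hypothetical stubs, not a real API.

def capture_frames(timeout_s):
    """Stub: return frames overheard on the medium without transmitting."""
    return []

def send_probe(target, probe):
    """Stub: transmit a probe and return any response."""
    return None

def passive_fingerprint(timeout_s=30):
    # Passive: never transmit; infer device identity from whatever the
    # devices volunteer on their own (beacons, advertisements, timing).
    seen = {}
    for frame in capture_frames(timeout_s):
        seen.setdefault(frame.source, []).append(frame.header_fields)
    return seen

def active_fingerprint(target):
    # Active: elicit behaviour by transmitting, then classify the responses
    # (which probes are answered, error codes, timing quirks).
    probes = ("version", "features", "name")
    return [r for p in probes if (r := send_probe(target, p)) is not None]
```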
UDP has one good feature: it is connectionless. But it has many bad features: packets can be lost, they can arrive multiple times, and there is no packet sequence, so packet 2 can arrive faster than packet 1. How can we keep the good and remove the bad? Are there any good implementations that provide a reliable transport protocol on top of UDP, so that we are still connectionless but without the problems mentioned? One example of what can be done with it is mosh.
What you describe as bad isn't really bad; it depends on the context.
For example, UDP is used a lot in real-time streaming, where delivery confirmation and resending are useless.
That being said, there are a few implementations that you might want to look at:
ENet (http://enet.bespin.org/)
RUDP (https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol)
UDT (https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol)
I work in an embedded context: CoAP (https://en.wikipedia.org/wiki/Constrained_Application_Protocol) also implements a lot of these features, so it's worth a look.
What is your reason for not choosing TCP?
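If you want to see the underlying mechanism itself, here is a minimal stop-and-wait sketch over plain UDP sockets, assuming a 4-byte sequence-number header. Real implementations such as ENet or UDT use sliding windows instead of stop-and-wait, so treat this only as an illustration:

```python
# Minimal stop-and-wait reliability over UDP: sequence numbers give
# duplicate detection and ordering, ACKs plus retransmission give delivery.
import socket
import struct

def reliable_send(sock, addr, payload, seq, timeout=0.5, retries=5):
    """Send one datagram and block until the matching ACK arrives."""
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True                # delivered and acknowledged
        except socket.timeout:
            continue                       # lost data or lost ACK: retransmit
    return False

def reliable_recv(sock, expected_seq):
    """Receive one datagram, ACK it, and drop duplicates/reordered packets."""
    while True:
        data, addr = sock.recvfrom(65535)
        (seq,) = struct.unpack("!I", data[:4])
        sock.sendto(struct.pack("!I", seq), addr)   # always (re-)ACK
        if seq == expected_seq:                     # ignore stale duplicates
            return data[4:]
```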
I'm trying to evaluate different pub/sub messaging protocols on their ability to horizontally scale without producing unnecessary cross chatter.
My architecture will have NodeJS servers with WebSocket clients connected. I plan on using a consistent-hashing-based router to direct clients to servers based on the topics they're interested in subscribing to. This would mean that for a given topic, only a subset of servers will have clients subscribing to that topic. Messages will then be published to a pub/sub broker, which would be responsible for fanning out that data to servers that have subscribers.
The situation I want to avoid is one in which every broker receives every request and the network becomes saturated. This is a clear issue with scaling Redis Pub/Sub. Adding servers shouldn't create an n-squared problem.
The number of clients on the pub/sub protocol would be the number of servers. Ideally, each server would be able to have a local broker to fan out data efficiently to multiple NodeJS processes, so as to avoid unnecessary network bandwidth. In most cases, for a given topic, all subscribers would be on that same server.
What pub/sub protocols offer this sort of topic based data propagation?
The protocols I'm evaluating are: MQTT, RabbitMQ, ZMQ, nanomsg. This list isn't exhaustive, and SaaS options are acceptable.
The quality-of-service constraints are easy: at-most-once or at-least-once delivery are both adequate. Acknowledgment isn't important. Event order isn't important. We're looking for fire and forget, with an emphasis on horizontal scalability.
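For reference, here is a hedged sketch of the consistent-hash routing described above, mapping a topic to one of N servers so that subscriptions for a topic concentrate on a single server. The server names and virtual-node count are illustrative:

```python
# Consistent-hash ring: topic -> server, stable under adding/removing servers.
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRouter:
    def __init__(self, servers, vnodes=100):
        # Each server gets `vnodes` points on the ring to even out the load.
        self._ring = sorted((_hash(f"{s}#{i}"), s)
                            for s in servers for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def server_for(self, topic: str) -> str:
        """Pick the first server clockwise from the topic's hash."""
        idx = bisect.bisect(self._keys, _hash(topic)) % len(self._ring)
        return self._ring[idx][1]

router = ConsistentHashRouter(["ws-1", "ws-2", "ws-3"])
print(router.server_for("stock.AAPL"))  # the same topic always routes the same way
```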
First, let me address a risk of misunderstanding.
In many cases, similar words do not mean the same thing, and the more abbreviations are involved, the greater the risk.
Having said that, let me review the PUB/SUB terminus technicus.
Martin SUSTRIK's and Pieter HINTJENS' teams at imatix & 250bpm have developed a few smart messaging frameworks over the past decades, so these guys know a lot about the architectural benefits, constraints and implementation compromises.
That background helps me state that these fathers of modern messaging do not consider PUB/SUB to be a protocol.
It is, at least in nanomsg & ZeroMQ, rather a smart, distributed, scalability-focused Formal Communication Pattern, i.e. a behaviour emulated by all involved parties.
Both ZeroMQ and nanomsg are broker-less.
In this sense, asking "what protocols" does not have solid grounds.
Let's start from the "data propagation" side
In initial ZeroMQ implementations, PUB had no other choice but to distribute all messages to all SUBs in a connected state. Pieter HINTJENS explained this design decision numerous times: the actual subscription-based filtering was performed on the SUB side (messages were distributed in a 1:all-connected manner).
PUB-side subscription-based filtering came much later; you may check the revision history to find the version from which this started to avoid 1:all-connected broadcasts of data.
Similarly, you may check the remarks from Martin SUSTRIK, who wrote many in-depth posts on the performance improvements designed into his fabulous nanomsg project.
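As a concrete illustration, here is a minimal pyzmq PUB/SUB sketch showing the prefix-based topic filtering discussed above; the port and topic names are arbitrary:

```python
# Minimal ZeroMQ PUB/SUB demo: subscription is a prefix match on the message.
import time
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"topic.a")   # prefix filter, not exact match

time.sleep(0.5)                 # give the SUB time to connect ("slow joiner")
pub.send(b"topic.a hello")      # delivered: matches the subscribed prefix
pub.send(b"topic.b hello")      # dropped: no matching subscription
print(sub.recv())               # b'topic.a hello'
```

Depending on the library version, the dropped message is filtered either on the PUB side (current libzmq over TCP) or on the SUB side (early versions), which is exactly the difference discussed above.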
Scalability as priority No. 1
If scalability is the focus of your post, and if this were a serious Project, my question number one would be: what is the quantitative metric for comparing feasible candidates against such a Project goal? In other words, how is feasibility translated into a utility function that scores candidates across all the parallel attributes your Project is interested in?
Are there any software libraries and/or wireless drivers that make it possible to turn a sequence of binary data into a wireless packet in the air? For example, if someone used Airpcap / Wireshark to capture a series of interesting packets, is there some library that can be fed that binary data in order to turn it back into 802.11 wireless packets for testing purposes? If so, can we then also make minor alterations to the values of the packet in order to generate a wide variety of testing scenarios? Is anyone aware of tools/libraries that enable or assist this scenario?
While there are many tools around that may be used to replay and send data, one of the most advanced and flexible ones is:
TCPReplay
http://tcpreplay.synfin.net/
You can edit the packets at different levels and then send them.
Excerpt from their website:
... You can ... classify traffic as client or server, rewrite Layer 2, 3 and 4 headers and finally replay ...
There are some alternatives, such as Bit-Twist and the WinPcap library.
Most Wi-Fi tools are set up for cracking networks or stealing data, so you might be able to re-purpose an existing attacker's tool or library (like ettercap or aircrack-ng) for your testing purposes. Most tools I've encountered focus on Ethernet, TCP and HTTP.
The following list of software might merit further investigation:
TCPReplay
Bit-Twist
aircrack-ng suite
Nemesis
Packet Editor
Bit-Twist and TCPReplay are your best bet if you're willing to compromise for something higher up in the protocol stack.
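If you prefer to script it, Scapy (a Python packet-crafting library) can do the read-modify-reinject loop the question describes. A hedged sketch, assuming a pcap file named capture.pcap and a monitor-mode interface named wlan0mon:

```python
# Load captured 802.11 frames, tweak a field, and re-inject them.
# Injection requires an adapter/driver that supports monitor mode.
from scapy.all import rdpcap, sendp, Dot11   # pip install scapy

frames = rdpcap("capture.pcap")              # e.g. saved from AirPcap/Wireshark

for frame in frames:
    if frame.haslayer(Dot11):
        # Minor alteration for test-scenario variety: rewrite the source MAC.
        frame[Dot11].addr2 = "00:11:22:33:44:55"
        sendp(frame, iface="wlan0mon", verbose=False)
```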
Wikipedia on TFTP states:
Windows 2008 introduced pipelined TFTP
Its aim is to enable good throughput over high-latency links. Unfortunately, no reference is given.
The only other reference I found is Bazootftp, which mentions pipelining support.
So how is pipelining implemented? Is it negotiated per RFC 2347?
Is it possible to do pipelining if only one side supports it (e.g. via some ACK tricks)?
I've seen Bazootftp add another packet-type, to signal the end of the stream.
Is Bazootftp's pipelining the same as in Windows?
And I don't exactly understand how the windowing works, especially with lost packets.
Any hints appreciated.
Pipelined TFTP is achieved by use of the negotiated option "windowsize". The term "pipelined" is really not the best one.
You can read more here:
http://www.vercot.com/~serva/advanced/TFTP.html
and it seems it is probably going to become an RFC:
http://datatracker.ietf.org/doc/draft-masotta-tftpexts-windowsize-opt/
Windowsize negotiation requires agreement on both sides, but Serva (1st link) does some tricks to get something similar against a regular RFC 1350 TFTP client.
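For what it's worth, the windowsize option rides on ordinary RFC 2347 option negotiation (the draft above was later published as RFC 7440): it is just another name/value pair appended to the read request, so a server that does not understand it can simply ignore it. A small sketch of the request encoding, with example filename and window size:

```python
# Build an RFC 1350 read request (RRQ) carrying an RFC 2347/7440
# "windowsize" option. Filename and window size are example values.
import struct

def build_rrq(filename, mode="octet", windowsize=8):
    parts = [
        struct.pack("!H", 1),                    # opcode 1 = RRQ
        filename.encode() + b"\x00",
        mode.encode() + b"\x00",
        b"windowsize\x00",                       # option name
        str(windowsize).encode() + b"\x00",      # option value, as ASCII
    ]
    return b"".join(parts)

# A server that supports the option replies with an OACK (opcode 6) echoing
# the accepted value; the client ACKs block 0, and from then on the server
# may send `windowsize` DATA blocks per ACK instead of one.
print(build_rrq("boot.img").hex())
```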