As in, how many connections can you have via WebRTC before performance is affected? Will this technology advance to support higher and higher connection counts? Thanks
I have a RabbitMQ server (a cluster), so no problem on that side. But I must connect 1000 or 2000 clients. Each client app must have 1 connection, and each client app uses multiple channels. Channels are not a problem, but connections seem to be limited (128 by default).
In such a case, how do you properly connect 2000 clients to RabbitMQ if you can't use 2000 connections? What are good ways to do this? Are there known patterns? (Knowing that the 2000 clients must be connected all the time.)
Many thanks in advance for your help and ideas!
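For reference, the "one connection per client app, multiple channels" pattern described above looks roughly like this with the amqplib Node client; this is only a sketch, and the host and queue names are made up for illustration. Note also that the per-node connection ceiling is typically governed by the operating system's file-descriptor limit, so raising that limit is usually part of scaling to thousands of connections.

    // Sketch: one TCP connection per client application, several channels on top of it.
    // Broker URL and queue names are hypothetical.
    import * as amqp from "amqplib";

    async function startClientApp(): Promise<void> {
      // A single TCP connection for the whole client application...
      const connection = await amqp.connect("amqp://rabbit.example.com");

      // ...and several lightweight channels multiplexed over that one connection.
      const commandChannel = await connection.createChannel();
      const telemetryChannel = await connection.createChannel();

      await commandChannel.assertQueue("commands");
      await telemetryChannel.assertQueue("telemetry");

      telemetryChannel.sendToQueue("telemetry", Buffer.from("hello"));
    }

    startClientApp().catch(console.error);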
I was reading and trying to find out about WebRTC's RTCDataChannel. As I understand it, WebSockets run on top of TCP and have higher latency than the SCTP that underlies WebRTC when, for example, sending binary data between a server and a browser (which could also be two peers in WebRTC). When an RTCDataChannel is set to unreliable mode (possible packet loss but faster), its underlying SCTP behaves like UDP (User Datagram Protocol), and when set to reliable mode it becomes TCP-like.
Is RTCDataChannel, when configured to be reliable ("TCP-like"), still faster than WebSockets (TCP), and if so, how much faster?
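To make the reliable/unreliable distinction concrete, here is a minimal browser-side sketch; the signaling and ICE negotiation needed to actually connect two peers are omitted, and the channel labels are arbitrary.

    // Sketch only: peer connection setup and signaling are not shown.
    const pc = new RTCPeerConnection();

    // Reliable, ordered channel: TCP-like behaviour (the default settings).
    const reliable = pc.createDataChannel("reliable");

    // Unreliable, unordered channel: no retransmissions, UDP-like behaviour.
    const unreliable = pc.createDataChannel("unreliable", {
      ordered: false,
      maxRetransmits: 0,
    });

    unreliable.onopen = () => unreliable.send("fire-and-forget");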
I want to support around 100K MQTT connections using ActiveMQ. The ActiveMQ server is rejecting connections beyond 30K. How do I tune ActiveMQ to support a larger number of connections?
I have tried the following
<transportConnector name="mqtt" allowLinkStealing="true"
    uri="mqtt+nio://0.0.0.0:1883?maximumConnections=100000&amp;wireFormat.maxFrameSize=104857600&amp;transport.defaultKeepAlive=60000&amp;transport.closeAsync=false&amp;useQueueForAccept=false"/>
in activemq.xml, but to no avail.
I did some Unix kernel tuning to raise the number of open file descriptors to 100,000.
Has anyone solved this problem?
If you are going to handle >100k connections, I'd recommend looking into a dedicated MQTT broker instead of a multi-protocol message broker. You can see a list of MQTT brokers on the MQTT GitHub wiki.
ActiveMQ is, as far as I know, not designed for handling that many MQTT connections and is not optimized for MQTT, because it's a multi-purpose message broker. If you want to stick with Apache software, Apache Apollo may help, although I don't know of any MQTT Apollo deployments of that size; it's probably worth a try if you need a multi-protocol broker. Again, I'd recommend a dedicated MQTT broker for large numbers of MQTT connections.
You should definitely look into reactive and multi-threaded MQTT brokers if you want to handle that number of connections, and you should make sure that the MQTT broker you choose is known to work with your desired connection count and load. HiveMQ, for example, is capable of handling >100k connections.
Full disclosure: I work for the company behind HiveMQ.
May I suggest you use Apache Apollo for MQTT connections when you have that number of concurrent sessions?
Apache Apollo is a sub project of ActiveMQ with the intent to make the broker scalable to a large number of connected clients. While ActiveMQ supports MQTT, it's not really optimized for this scenario.
JoramMQ (http://jorammq.com) is based on the Joram (http://joram.ow2.org) multi-protocol message broker and it supports more than 500K concurrent MQTT connections.
For anyone still trying to find a fitting MQTT broker for many connections, here are my tests of multiple brokers (I should actually add ActiveMQ to the comparison). Performance is not the only thing to compare; also consider clustering, monitoring, support, and price. The final pick depends on your own needs.
Tests were conducted on a 32GB RAM, AMD 5800X, Ubuntu 18 PC.
50,000 MQTT clients connected without SSL.
Clients were subscribed to 4 topics, and no messages were published.
Tests above 50k need multiple machines (or some other tricks) because of the ~65k limit on outgoing sockets on a single machine; a rough sketch of the client side follows.
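The kind of load generator used for such a test can be sketched with the mqtt.js client roughly like this; the broker address, client count, and topic names are placeholders rather than the exact setup used, and, as noted above, a single process cannot really open 50k outgoing sockets, so a real run is spread across machines.

    // Rough sketch of the test shape: N clients, 4 subscriptions each, no publishes.
    // Broker URL and topic names are placeholders.
    import mqtt from "mqtt";

    const CLIENTS = 50_000;
    const TOPICS = ["t/1", "t/2", "t/3", "t/4"];

    for (let i = 0; i < CLIENTS; i++) {
      const client = mqtt.connect("mqtt://broker.example.com:1883", {
        clientId: `load-client-${i}`,
        clean: true,
      });
      client.on("connect", () => {
        for (const topic of TOPICS) {
          client.subscribe(topic);
        }
      });
    }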
Test results
RabbitMQ: 21GB of RAM and ~4 cores.
Mosquitto: 200Mb of RAM and ~0.05 core.
HiveMQ: 2.1GB of RAM and ~0.05 core.
EMQX: 1.4GB of RAM and ~1 core.
VerneMQ: 1.7GB of RAM and ~0.5 core.
If the pricing is OK for you, HiveMQ looks to me like the best broker.
If you are looking for something for free - check VerneMQ.
Can anyone tell me where to use the UDP protocol, besides live streaming of music/video? What are typical use cases for UDP?
UDP is also good for broadcast, such as service discovery: finding that newly plugged-in printer.
Also of note is that broadcast is anonymous: you don't need to specify target hosts, so it can form the foundation of a convenient plug-and-play or high-availability network.
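As a minimal sketch of that idea using Node's dgram module (the discovery port and payload are made up for illustration):

    // Broadcast a discovery probe to everyone on the local subnet; no target host needed.
    import * as dgram from "node:dgram";

    const socket = dgram.createSocket("udp4");

    socket.bind(() => {
      socket.setBroadcast(true);
      const probe = Buffer.from("WHO_HAS_A_PRINTER");
      // 255.255.255.255 reaches all hosts on the local subnet.
      socket.send(probe, 0, probe.length, 9999, "255.255.255.255");
    });

    // Devices that answer reply directly to this socket.
    socket.on("message", (reply, rinfo) => {
      console.log(`printer at ${rinfo.address}: ${reply.toString()}`);
    });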
UDP is stateless and is good for applications that have large numbers of clients connecting to a server, such as time servers or DNS. The fact that no connection has to be established and maintained reduces the memory required by the server. There is no handshaking involved, which reduces the traffic on the network. On the downside, if the transferred information requires multiple packets, there is no transmission control to ensure that all packets arrive, and in the correct order; but in games, lost packets are probably better than late or out-of-order ones.
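A minimal sketch of that stateless request/response pattern with Node's dgram module (the port and reply content are arbitrary):

    // Stateless UDP responder: no accepted connections, no per-client state to keep.
    import * as dgram from "node:dgram";

    const server = dgram.createSocket("udp4");

    server.on("message", (msg, rinfo) => {
      // Answer each datagram individually; nothing is remembered between requests.
      const reply = Buffer.from(`you sent ${msg.length} bytes`);
      server.send(reply, rinfo.port, rinfo.address);
    });

    server.bind(4000);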
Anything else where you need performance but can survive if a packet gets lost along the way. Multiplayer games come to mind, for example.
A very common use case is DNS, since the overhead of creating a TCP connection would by far outweigh the actual payload.
Additional use cases are NTP (Network Time Protocol) and most video games.
I use UDP to add chat capabilities to our applications. No need to create a server. It is also useful to dispatch events to all users of our applications.
Just curious to know what your experiences with PNRP are. I have been using WCF to code up a peer-to-peer application.
I support 2 different setups: one using PNRP (i.e. no server) and another using a central server.
The central server approach is really fast over a LAN: peers can connect in around 0.5-2 seconds at most. With PNRP, though, it sometimes takes up to a minute for peers to connect.
Is this normal? Is something wrong with my setup?
Ages ago I disabled Teredo, and that made PNRP run very fast. But at the end of the day we will probably need to keep Teredo in the mix to help with our application running over a WAN.
Thoughts?
I have used WCF's Peer Channels in an application requiring a transient elected server state. This only needs to work in the link-local cloud, since the peers will all be on the same subnet. It does seem to take an inordinate amount of time to register with a cloud, and I am not sure how to confirm whether a particular peer is currently registered in a particular cloud (using the abstraction of Peer Channels), but otherwise I like the convenience of it.