I'm using RestComm SMSC Gateway 7.1.71 and I would like to limit throughput (SMS/second) on SS7 networks.
Where can I set this limit, analogous to the SMPP connector's RateLimitPerSecond setting?
Thanks in advance.
Did you check the documentation? See http://documentation.telestax.com/core/smsc/SMSC_Gateway_Admin_Guide.html#_esme_settings_create_cli for SMPP and http://documentation.telestax.com/core/ss7/SS7_Stack_User_Guide.html#_technical_notes for SS7.
I'm a newbie to DDS/Fast-RTPS.
Based on my understanding, discovery is LAN-based: it failed to discover a node that is not on the same LAN. Is that correct?
I'm wondering whether we can use Fast-RTPS to communicate across networks.
PS: let's ignore NAT/firewall issues and assume a fully reachable TCP/IP network environment.
DDS discovery uses multicast UDP by default. If your switches and other network infrastructure are set to swallow multicast packets, or if the TTL is set too low, the default discovery traffic will never be seen and discovery will not complete.
You can raise the TTL on your infrastructure, or you can tell the DDS library to target specific peer addresses (see the documentation for your provider's libraries).
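With Fast-RTPS specifically, one way to do this is to add the remote machine to the participant's initial peers list, so discovery traffic is sent to it by unicast instead of relying on multicast. A minimal sketch as an XML profile (the address and port are placeholder assumptions, and the exact schema differs between Fast-RTPS/Fast DDS releases, so check the eProsima documentation for your version):

    <profiles>
      <participant profile_name="unicast_discovery">
        <rtps>
          <builtin>
            <initialPeersList>
              <locator>
                <udpv4>
                  <!-- hypothetical remote participant outside the local LAN -->
                  <address>203.0.113.10</address>
                  <!-- default metatraffic unicast port for domain 0, participant 0 -->
                  <port>7410</port>
                </udpv4>
              </locator>
            </initialPeersList>
          </builtin>
        </rtps>
      </participant>
    </profiles>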
I have a JavaScript SPA that needs to support a user being offline for brief periods of time. I'm considering using Action Cable to broadcast changes the client may not be aware of.
If a WebSocket connection is lost for a brief amount of time and then reconnected, will the client receive messages that were broadcast while it was offline?
Yes. Action Cable will trigger a reconnect when the client regains access to the Internet.
You can test this yourself by logging connections on your server and your client, then taking the client offline and reconnecting.
Hope this helps.
From the guide:
Broadcastings are purely an online queue and time dependent. If a consumer is not streaming (subscribed to a given channel), they'll not get the broadcast should they connect later.
So no, the client will not receive messages that were broadcast while it was offline. If you need those, have the client catch up on reconnect through some other means, e.g. by requesting everything since a last-seen timestamp or ID over a regular HTTP endpoint.
I'm trying to set up a call between a WebRTC-based client (Olympus) and a standard one (e.g. X-Lite).
The call fails with a 480 response. I believe this is because SDP negotiation failed.
Currently I use the standard Telestax Media Server setup.
Can RestComm be configured so that it transcodes the stream (and adjusts the codec negotiation), allowing WebRTC-based clients to call standard SIP ones?
Thank you very much in advance.
Hubert
I want to support around 100K MQTT connections using ActiveMQ. The ActiveMQ server rejects connections beyond 30K. How can I tune ActiveMQ to support more connections?
I have tried the following in activemq.xml, but to no avail:

    <transportConnector name="mqtt" allowLinkStealing="true"
        uri="mqtt+nio://0.0.0.0:1883?maximumConnections=100000&amp;wireFormat.maxFrameSize=104857600&amp;transport.defaultKeepAlive=60000&amp;transport.closeAsync=false&amp;useQueueForAccept=false"/>
I also did some Unix kernel tuning to raise the number of open file descriptors to 100000, along the lines of the sketch below.
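These are illustrative values only; exact file locations vary by distribution, and the "activemq" user name is an assumption:

    # /etc/security/limits.conf -- per-process open file limit for the broker user
    activemq  soft  nofile  100000
    activemq  hard  nofile  100000

    # /etc/sysctl.conf -- kernel-wide limits relevant to many concurrent TCP connections
    fs.file-max = 200000
    net.core.somaxconn = 4096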
Has anyone solved this problem?
If you are going to handle more than 100k connections, I'd recommend looking into a dedicated MQTT broker instead of a multi-protocol message broker. You can see a list of MQTT brokers on the MQTT GitHub wiki.
As far as I know, ActiveMQ is not designed to handle that many MQTT connections and is not optimized for MQTT, because it's a multi-purpose message queue. If you want to stick with Apache software, Apache Apollo may help, although I don't know of any Apollo MQTT deployments of that size; it's probably worth a try if you need a multi-protocol broker. Again, I'd recommend a dedicated MQTT broker for large numbers of MQTT connections.
You should definitely look into reactive, multi-threaded MQTT brokers if you want to handle that number of connections, and make sure the broker you choose is known to work at your desired connection count and load. HiveMQ, for example, is capable of handling more than 100k connections.
Full disclosure: I work for the company behind HiveMQ.
May I suggest you use Apache Apollo for MQTT connections when you have that number of concurrent sessions?
Apache Apollo is a subproject of ActiveMQ, created with the intent of making the broker scale to a large number of connected clients. While ActiveMQ supports MQTT, it's not really optimized for this scenario.
JoramMQ (http://jorammq.com) is based on the Joram (http://joram.ow2.org) multi-protocol message broker and it supports more than 500K concurrent MQTT connections.
For anyone still trying to find a fitting MQTT broker for many connections, here are my tests of multiple brokers (I should actually add ActiveMQ to the comparison). Performance is not the only thing to compare; clustering, monitoring, support, and price matter too. The final pick depends on your own needs.
Tests were conducted on a PC with 32GB of RAM, an AMD 5800X, and Ubuntu 18.
50,000 MQTT clients connected, without SSL.
Clients subscribed to 4 channels, and no messages were published.
Tests above 50k need multiple machines or other tricks, because a single source IP is limited to roughly 65k outgoing connections to the same destination by the 16-bit port range.
Test results
RabbitMQ: 21GB of RAM and ~4 cores.
Mosquitto: 200MB of RAM and ~0.05 core.
HiveMQ: 2.1GB of RAM and ~0.05 core.
EMQX: 1.4GB of RAM and ~1 core.
VerneMQ: 1.7GB of RAM and ~0.5 core.
If the pricing is OK for you, HiveMQ looks to me like the best broker.
If you are looking for something free, check VerneMQ.
Can anyone recommend a good load balancer for TCP messages sent to a Netty server?
Messages are encoded/decoded using Google Protobuf.
You can take a look at Bruno de Carvalho's Netty load balancer, which will give you some perspective on the issue.
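Since a TCP balancer only relays raw bytes, the Protobuf framing passes through untouched. Below is a minimal round-robin TCP proxy sketch modeled on Netty's well-known HexDumpProxy example; the listen port and backend addresses are placeholder assumptions, and a production balancer would also need health checks and reconnection logic:

    import io.netty.bootstrap.Bootstrap;
    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.*;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;

    import java.net.InetSocketAddress;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    public final class RoundRobinTcpProxy {

        // Hypothetical backend pool: the Netty servers we balance across.
        static final List<InetSocketAddress> BACKENDS = Arrays.asList(
                new InetSocketAddress("10.0.0.1", 9000),
                new InetSocketAddress("10.0.0.2", 9000));
        static final AtomicInteger NEXT = new AtomicInteger();

        public static void main(String[] args) throws Exception {
            EventLoopGroup boss = new NioEventLoopGroup(1);
            EventLoopGroup worker = new NioEventLoopGroup();
            try {
                new ServerBootstrap()
                        .group(boss, worker)
                        .channel(NioServerSocketChannel.class)
                        // Read from the client only once the backend connection is up.
                        .childOption(ChannelOption.AUTO_READ, false)
                        .childHandler(new ChannelInitializer<Channel>() {
                            @Override
                            protected void initChannel(Channel ch) {
                                ch.pipeline().addLast(new FrontendHandler());
                            }
                        })
                        .bind(8080).sync().channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                worker.shutdownGracefully();
            }
        }

        /** Accepts client connections and opens one backend connection per client. */
        static final class FrontendHandler extends ChannelInboundHandlerAdapter {
            private Channel outbound;

            @Override
            public void channelActive(ChannelHandlerContext ctx) {
                final Channel inbound = ctx.channel();
                InetSocketAddress backend =
                        BACKENDS.get(Math.floorMod(NEXT.getAndIncrement(), BACKENDS.size()));
                ChannelFuture f = new Bootstrap()
                        .group(inbound.eventLoop())
                        .channel(NioSocketChannel.class)
                        .handler(new RelayHandler(inbound))
                        .connect(backend);
                outbound = f.channel();
                f.addListener((ChannelFutureListener) future -> {
                    if (future.isSuccess()) {
                        inbound.read();   // start pulling client bytes
                    } else {
                        inbound.close();
                    }
                });
            }

            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                // Forward client bytes to the backend; read more only after the flush succeeds.
                outbound.writeAndFlush(msg).addListener((ChannelFutureListener) future -> {
                    if (future.isSuccess()) {
                        ctx.channel().read();
                    } else {
                        future.channel().close();
                    }
                });
            }

            @Override
            public void channelInactive(ChannelHandlerContext ctx) {
                if (outbound != null) {
                    outbound.close();
                }
            }
        }

        /** Relays backend bytes back to the client. */
        static final class RelayHandler extends ChannelInboundHandlerAdapter {
            private final Channel client;

            RelayHandler(Channel client) { this.client = client; }

            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                client.writeAndFlush(msg);
            }

            @Override
            public void channelInactive(ChannelHandlerContext ctx) {
                client.close();
            }
        }
    }

The AUTO_READ toggle is the same trick HexDumpProxy uses: the proxy doesn't pull client data until the backend connection is ready, so nothing needs to be buffered in the meantime.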