Multiple SMPP Sessions on Same Bind (System ID) - smpp

Assuming I have an ESME binding and pushing bulk SMS to an SMSC, what is the advantage of having multiple (say 2 or 3) TCP/IP sessions to the same SMSC (same IP/port and system ID), if the SMSC enforces a fixed send window (say 200 total for the ESME) regardless of the number of sessions?
In the setup above, if session no. 1 is removed at some instant, what happens to the packets that session was expecting to receive from the SMSC? Would the remote SMSC re-route them to another session, since the IP, port, and system ID are the same?
I have already read this too:
Multiple SMPP Sessions

You're asking about multiple bindings from an ESME to an SMSC. Most SMSCs have an implementation to handle multiple bindings from the same IP and port. The SMSC will calculate the total message count across all the bindings.
Answer to number 1:
Multiple bindings are useful when the ESME wants to send a larger number of messages. For example: if your ESME's limit is 100 messages per second on a single TCP connection and your requirement is 300 messages per second to your SMSC, then you can bind another two connections to the SMSC and send 100 messages per second on each connection, achieving 300 messages per second from your ESME to your SMSC.
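To make that concrete, here is a minimal C sketch of a sender distributing traffic round-robin over several bound sessions, with one shared window counter, since in the scenario above the SMSC caps the ESME as a whole (200 outstanding) rather than each session. smpp_bind() and smpp_submit() are hypothetical placeholders for whatever SMPP library or PDU encoder you actually use.

    /* Round-robin submit_sm distribution over N bound sessions.
     * The outstanding-message window is shared, because the SMSC
     * enforces one limit for the whole ESME, not one per session. */
    #include <stddef.h>

    #define NUM_SESSIONS 3
    #define ESME_WINDOW  200  /* total unacked submit_sm across ALL sessions */

    /* Hypothetical library calls -- substitute your real SMPP stack. */
    extern int smpp_bind(const char *host, int port, const char *system_id);
    extern int smpp_submit(int session, const char *dest, const char *text);

    static int    sessions[NUM_SESSIONS];
    static size_t next_session = 0;  /* round-robin cursor */
    static int    outstanding  = 0;  /* sent but not yet acknowledged */

    int send_sms(const char *dest, const char *text)
    {
        if (outstanding >= ESME_WINDOW)
            return -1;  /* window full: wait for a submit_sm_resp first */
        int s = sessions[next_session];
        next_session = (next_session + 1) % NUM_SESSIONS;
        outstanding++;  /* decrement this when the resp arrives */
        return smpp_submit(s, dest, text);
    }

Note that with a shared 200-message window, the extra sessions buy per-connection throughput and redundancy, not a larger window.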
Answer to number 2:
It depends on your SMSC implementation, but in most cases the remaining packets will be delivered over the other connections, if any are available.

Related

How to test a UDP server limit?

A server listens on a UDP port, and many clients can connect to it; the clients connected to it form many groups. Within a group, one client sends a message and the server needs to route the message to the rest of the group. Many such groups could be running simultaneously. How can we test the maximum number of connections the server can handle without inducing a visible lag in the response time?
First, let me describe your network topology again. There is a server and many clients, and the clients are divided into several groups. A client sends a message to the server, and the server then sends something to the other clients in that group.
If the topology is as I describe above, is the connection limit you want to reach about how many clients the server can send to at the same time? Or do you want to know how many clients can send to the server at the same time?
Either circumstance can be tested with multiple threads (or goroutines, if you can write the test in Go), but each needs a different criterion for deciding where the limit lies.
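As an illustrative sketch (not a definitive harness), the C program below spawns one thread per simulated client; each client sends a datagram and times the echo. Ramp NUM_CLIENTS up across runs and watch the worst round-trip time; the limit is wherever your lag threshold is crossed. SERVER_IP, SERVER_PORT, and the assumption that the server echoes something back are placeholders for your setup.

    /* UDP load-test sketch: N concurrent clients, worst-case RTT. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define SERVER_IP   "127.0.0.1"  /* placeholder */
    #define SERVER_PORT 9000         /* placeholder */
    #define NUM_CLIENTS 500          /* raise this between runs */

    static void *client(void *arg)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(SERVER_PORT);
        inet_pton(AF_INET, SERVER_IP, &srv.sin_addr);

        char buf[64] = "ping";
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        sendto(s, buf, 4, 0, (struct sockaddr *)&srv, sizeof srv);
        recv(s, buf, sizeof buf, 0);  /* blocks until the server replies */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        *(double *)arg = (t1.tv_sec - t0.tv_sec) * 1e3 +
                         (t1.tv_nsec - t0.tv_nsec) / 1e6;  /* ms */
        close(s);
        return NULL;
    }

    int main(void)
    {
        pthread_t th[NUM_CLIENTS];
        double    rtt[NUM_CLIENTS];
        for (int i = 0; i < NUM_CLIENTS; i++)
            pthread_create(&th[i], NULL, client, &rtt[i]);
        double worst = 0;
        for (int i = 0; i < NUM_CLIENTS; i++) {
            pthread_join(th[i], NULL);
            if (rtt[i] > worst) worst = rtt[i];
        }
        printf("%d clients, worst RTT %.2f ms\n", NUM_CLIENTS, worst);
        return 0;
    }

For the send-direction test you would time the group fan-out instead of a single echo, but the ramp-and-measure structure stays the same.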

Will ICE negotiations between peers behind two symmetric NATs result in requiring two TURN servers?

I read RFC 5766 and RFC 8445, but I feel like there is a bit of a disconnect between how TURN can be used versus how ICE actually utilizes the relay candidates.
The TURN RFC describes the use of one single TURN server to ferry data between a client and a peer. The transport address on the TURN server accepts data flow from a client via TURN messages, whereas the relayed transport address accepts data flow from peer(s) via UDP. This sounds great - one TURN server and bidirectional data flow.
However, in reading about ICE, I feel like this never happens. Both caller and callee independently allocate on potentially two TURN servers, and then send their respective relayed transport addresses to each other. More of an "I can be reached via this relayed transport address" sort of thing. Connectivity checks then occur, and thus two TURN servers end up being used here, where data only flows in one direction through the relayed transport address of each participant's allocated TURN server.
Is this true?
From the TURN RFC, it says the following:
The client can arrange for the server to relay packets to and from certain other hosts (called peers) and can control aspects of how the relaying is done. The client does this by obtaining an IP address and port on the server, called the relayed transport address. When a peer sends a packet to the relayed transport address, the server relays the packet to the client. When the client sends a data packet to the server, the server relays it to the appropriate peer using the relayed transport address as the source.
However, I can't see a scenario whereby through ICE negotiations, data would ever flow through the transport address from the client to the peer. Both the caller and the callee independently allocate on a TURN server and send relayed transport addresses to each other to be reached on.
Basically, TURN can do bidirectional data flow, but with ICE between two symmetric NATs, it won't. Is this correct?
It's a bit complicated... bear with me. Reading just the TURN RFC isn't enough; you need context from RFC 5245 on ICE too.
The following scenario is the baseline case:
client A allocates a relay address 8.8.8.8:43739, sends it to client B
client B sends a UDP packet to 8.8.8.8:43739
the TURN server wraps the packet in a STUN message and sends it to client A
Now, as you say, client B will typically also allocate its own relay address and send it to A. Why isn't that one used all the time (or half the time)? The priorities of the candidates are equal, after all.
However, the candidate pair priority, which determines which pair to pick, includes a factor that acts as a tie-breaker:
pair priority = 2^32*MIN(G,D) + 2*MAX(G,D) + (G>D?1:0)
where G>D?1:0 is an expression whose value is 1 if G is greater than D, and 0 otherwise.
This means the pair that uses the caller's relay address (assuming the caller is the controlling agent) has a higher priority than the pair with the callee's relay address.
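For illustration, here is a small helper computing that pair priority; G and D are the controlling and controlled agents' 32-bit candidate priorities as defined in RFC 5245:

    #include <stdint.h>

    /* RFC 5245 candidate pair priority. The final +1 is the tie-breaker
     * that lets the controlling agent's pairing win when G and D alone
     * would leave two pairs equal. */
    static uint64_t pair_priority(uint32_t g, uint32_t d)
    {
        uint32_t lo = g < d ? g : d;  /* MIN(G,D) */
        uint32_t hi = g > d ? g : d;  /* MAX(G,D) */
        return ((uint64_t)1 << 32) * lo + 2ULL * hi + (g > d ? 1 : 0);
    }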
Additionally, there is another candidate in play here: the address client B sends from when it sends to 8.8.8.8:43739. This will typically be one of its local candidates, and the TURN server sees (and puts into the data indication) the public (post-NAT) IP of client B. On client A this shows up as a remote srflx candidate, which has a higher priority than a relayed candidate and will therefore be used.
Now, if B is behind a symmetric NAT, the TURN server will (I think) see a different port from client B than anything for which client A has added a permission. This typically means the TURN server will drop the packet, and that pair won't work.
If client A is not behind a symmetric NAT, the baseline process is repeated in the other direction. Slightly lower priority, but it's the same in terms of latency, so users won't notice.
If both clients are behind symmetric NATs (and now we are finally at the case you're asking about), neither direction will work and a relay-relay pair will be used. This is fairly rare (probably <1%), and the latency impact is typically insignificant even when the two clients are on different TURN servers.

WCF MaxPendingChannels setting vs MaxConnections

What is the relation between those properties? Which one of them governs the number of clients that can connect to a net.tcp reliable service?
I tried to read up on both of them, but it is not clear which controls what in the throttling of the service.
Hope this is helpful.
MaxPendingChannels has to do with the number of clients that can connect to a service via reliable sessions.
When the sender creates a reliable session channel to a receiver, a handshake between them establishes a reliable session. After the reliable session is established, the channel is put in a pending channel queue for acceptance by the service. The MaxPendingChannels property indicates how many channels can be in this state.
MaxConnections behaves differently depending on whether it is set on the client or the server: on the client, it limits the connections that are pooled; on the server, it limits connections that haven't yet been accepted by the ServiceModel layer (ref).
In my opinion, the two properties describe similar things, namely the number of channels clients can use to connect at the same time, with one difference. The default ConcurrencyMode for a WCF service is ConcurrencyMode.Single, which limits the number of connections a client can make. In this mode, MaxConnections represents the maximum number of connections allowed to be pending dispatch on the server, while MaxPendingChannels refers to the number of pending connections for reliable sessions.

Number of packets in broadcast/multicast

When a host sends out a broadcast, how does it calculate the number of identical packets it needs to send so that all the other hosts on the same LAN receive it? For example, when a host boots up, it sends a DHCP broadcast to all the other hosts on the LAN. How can it determine how many copies of the packet to send?
OK, double-checked with Wikipedia. You mention both "broadcast" and "multicast" in your title, but they're significantly different from each other.
There is no calculation for broadcast. The answer is that you don't know, or care, how many other hosts are out there. You send a single packet to the broadcast address, and it's the responsibility of every listening host to act on the packet that was sent. On a /24 subnet (the classic Class C size), such as 192.168.1.0/24, the broadcast address is 192.168.1.255.
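As a minimal sketch, sending one broadcast datagram in C looks like this; the address, port, and payload are example values (an actual DHCP client broadcasts to 255.255.255.255 on port 67, but the one-send principle is the same):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int broadcast_once(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        /* SO_BROADCAST must be enabled before sending to a broadcast address */
        setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof on);

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(6000);  /* example port */
        inet_pton(AF_INET, "192.168.1.255", &dst.sin_addr);

        const char msg[] = "hello everyone";
        /* one sendto(), however many hosts happen to be listening */
        ssize_t n = sendto(s, msg, sizeof msg, 0,
                           (struct sockaddr *)&dst, sizeof dst);
        close(s);
        return n < 0 ? -1 : 0;
    }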
With multicast, the originating host also only needs to send out one packet, so again no calculation of total packets is needed. From Wikipedia:
Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes in the network take care of replicating the packet to reach multiple receivers only when necessary.
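The sender's side is just as simple as the broadcast case. A sketch, using 239.0.0.1 (an example group in the administratively scoped range) and an example port:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int multicast_once(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in grp = {0};
        grp.sin_family = AF_INET;
        grp.sin_port   = htons(5000);  /* example port */
        inet_pton(AF_INET, "239.0.0.1", &grp.sin_addr);

        /* Still a single sendto(): the network replicates the packet
         * toward whichever receivers joined 239.0.0.1 via IGMP. */
        const char msg[] = "to the group";
        ssize_t n = sendto(s, msg, sizeof msg, 0,
                           (struct sockaddr *)&grp, sizeof grp);
        close(s);
        return n < 0 ? -1 : 0;
    }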

multiple UDP ports

I have a situation where I have to handle multiple live UDP streams on the server.
I have two options (as I see it):
Single socket:
1) Listen on a single port on the server, receive the data from all clients on that same port, and create a thread per client to process the data until that client stops sending.
Here only one port is used to receive the data, and a number of threads are used to process it.
Multiple sockets:
2) The client requests an open port from the server, the application sends the open port to the client, and the server opens a new thread listening on that port to receive and process the data. Here each client has a unique port to send the data to.
I have already implemented a way to know which packet is coming from which client over UDP.
I have 1000+ clients and I am receiving 60 KB of data per second.
Are there any performance issues with the above methods, or is there a more efficient way to handle this kind of task in C?
Thanks,
Raghu
With that many clients, having one thread per client is very inefficient, since lots and lots of context switches must be performed.
Also, the number of ports you can open per IP is limited (a port is a 16-bit number).
Therefore "Single Socket" will be far more efficient. But you can also use "Multiple Sockets" with just a single thread by using an asynchronous API. If you can identify the client from the packet's payload, then there is no need to have a port per client.
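For illustration, here is a minimal single-socket server sketch in C: one thread, one port, with each client identified by the source address recvfrom() reports (port 9000 is an example value):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(9000);  /* single well-known port */
        bind(s, (struct sockaddr *)&addr, sizeof addr);

        char buf[2048];
        for (;;) {
            struct sockaddr_in peer;
            socklen_t plen = sizeof peer;
            ssize_t n = recvfrom(s, buf, sizeof buf, 0,
                                 (struct sockaddr *)&peer, &plen);
            if (n < 0)
                continue;
            /* peer.sin_addr / peer.sin_port identify the client; look the
             * pair up in a table instead of spawning a thread per client. */
            char ip[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof ip);
            printf("%zd bytes from %s:%u\n", n, ip,
                   (unsigned)ntohs(peer.sin_port));
        }
    }

At 1000+ clients and ~60 KB/s aggregate, one loop draining a single socket like this comfortably handles the load; reach for epoll or a small worker pool only if per-packet processing becomes the bottleneck.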