Roaming Extensions with Asterisk - load-balancing

I've done some preliminary searches but I have not found anything about how roaming extensions could be done.
Basically I'd like the ability to have Asterisk servers geographically distributed and/or have multiple servers in a single location, and to round-robin user connections across them appropriately.
If two servers both have an extension '1000', and that user connects to Server A, and a user on Server B then tries to call it, is there a way for Server B to realise that '1000' isn't connected to it and try the other server?
I'm looking at possibilities both for fault tolerance and for minimising latency when people are spread all around the country. It seems like it would have lots of applications.
I'm not interested in an active-passive cluster, nor in actually assigning extensions to specific servers geographically. Ideally any extension should be able to connect to any server and call any other extension.

1) FollowMe-style dialplan logic: if the extension is not registered locally, dial the other server.
2) The DUNDi protocol, which is designed to track exactly this kind of roaming user.

Related

How to test a UDP server limit?

A server listens on a UDP port and many clients can connect to it, organised into groups. Within a group, one client sends a message and the server routes it to the rest of the group; many such groups can be running simultaneously. How can we test the maximum number of connections the server can handle without introducing a visible lag in the response time?
Firstly, let me restate your network topology: there is a server and many clients, and the clients are divided into several groups. A client sends a message to the server, and the server then sends something to the other clients in that group.
If that is the correct topology, is the limit you want to find how many clients the server can send to at the same time, or how many clients can send to the server at the same time?
Either case can be tested by spawning many concurrent clients, for example with multiple threads, or with goroutines if you can write the test in Go. The two cases just need different criteria for deciding when the limit has been reached.
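Here is a minimal sketch of the second case (many clients sending at once) in Go; the server address, the payload, and the 200 ms "visible lag" threshold are made-up placeholders:

```go
package main

import (
	"fmt"
	"net"
	"sync"
	"time"
)

// One simulated client: send a datagram, wait for the reply, return the round-trip time.
func probe(addr string) (time.Duration, error) {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		return 0, err
	}
	defer conn.Close()

	start := time.Now()
	if _, err := conn.Write([]byte("ping")); err != nil {
		return 0, err
	}
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 1500)
	if _, err := conn.Read(buf); err != nil {
		return 0, err // timeout or error counts as a failure
	}
	return time.Since(start), nil
}

func main() {
	const addr = "127.0.0.1:9000" // hypothetical server under test
	for clients := 100; clients <= 10000; clients *= 2 {
		var wg sync.WaitGroup
		var mu sync.Mutex
		var worst time.Duration
		failures := 0

		for i := 0; i < clients; i++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				rtt, err := probe(addr)
				mu.Lock()
				defer mu.Unlock()
				if err != nil {
					failures++
				} else if rtt > worst {
					worst = rtt
				}
			}()
		}
		wg.Wait()
		fmt.Printf("%5d clients: worst RTT %v, failures %d\n", clients, worst, failures)
		if worst > 200*time.Millisecond || failures > 0 {
			break // "visible lag" threshold reached; tune to taste
		}
	}
}
```

The first case can reuse the same loop with a different success criterion, for example counting how many group members actually receive the fanned-out message within the deadline.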

Does WebRTC allow one-to-many (multicast) connections?

I've read a lot about WebRTC, but there's one question that still remains. I hope you can help me with that:
Does WebRTC allow me to create a one-to-many connection? I don't mean "being able to have multiple connections to different computers"; I mean having one connection that multicasts its data to multiple endpoints without the need to "upload" the data once for each endpoint. Will it be possible to send one single packet to the network that, once it is out there, magically splits itself into multiple packets with different targets?
I hope you get what I'm looking for :)
Until now, I've only seen one-to-one connections, or solutions that have one connection to a central server that does the multicast for them (which usually results in twice the ping).
But to me, one-to-one connections don't seem all that useful (due to the low upload bandwidth of clients), and solutions with a central server are also possible without WebRTC (using WebSockets), so the only real use case for WebRTC would be one-to-many connections.
So.. is this something that will be possible in the future? Or is it already possible today?
Three things:
IP multicast in the Internet is not possible at the moment (multicast addresses are not routed by ISPs)
WebRTC fits many use cases beyond one-to-many communication, just have a look at this document: https://datatracker.ietf.org/doc/html/draft-ietf-rtcweb-use-cases-and-requirements-06
WebRTC connections between browsers are always encrypted (using SRTP for A/V data and DTLS for generic data) and the encryption parameters (session keys etc.) are negotiated for every connection separately. How would you do that in a multicast environment (think of it as a distribution tree)?
So no, WebRTC cannot be used with IP multicast.
I would answer "not for now", because as a programmer I can tell you that there are a number of ways browser developers could make this work if we (users) insist on its importance. How? Since there is encryption, they could allow the session's encryption keys to be shared with the group of 'registered' (multicast) receivers. And how would that sharing happen? Well, the web was created for sharing: the most obvious way is mediation by a web server plus a JavaScript WebRTC API call to load the group keys. Since multicast is most often used for efficient video distribution, you would typically have an RTP/SRTP video server, and the web server can live on the same machine. If this were ever extended to browsers, the "server" role could even be taken by the browser that created the multicast stream (the sender); the clients would just need to know who that is.
Again: In December 2013, this is still not possible. And multicasts are allowed on the Internet only in:
some experimental WAN nets
some internet+video ISP nets
LANs (when enabled at the switch level; cheap switches simply forward it to all ports). Of course, you may well be an ISP, a researcher, or a LAN user, in which case this still matters to you.

LDAP Server side sorting - really a good idea?

I'm toying with using server-side sorting in my OpenLDAP server. However, as I also get to write the client code, I can see that all it buys me in this case is one line of sorting code at the client. And as the client is one of presently 4, soon to be 16 Tomcats, maybe hundreds if the usage balloons, sorting at the client actually makes more sense to me. I'm wondering whether SSS is really considered much of a good idea. My search results in this case aren't large either: dozens rather than hundreds. Just wondering whether it might be more of a weapon than a tool.
In OpenLDAP it is bundled with VLV (Virtual List View), which I will need some day, so it is already installed. So it's really a programming question, not just a configuration question, hence SO not SF.
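To make the comparison concrete, the client-side alternative really is tiny. A minimal sketch in Go using the go-ldap library (the host, base DN, filter, and attribute names are placeholders; in my actual Java clients this would be a single Collections.sort call with a comparator):

```go
package main

import (
	"fmt"
	"log"
	"sort"

	"github.com/go-ldap/ldap/v3"
)

func main() {
	// Placeholder connection details.
	conn, err := ldap.DialURL("ldap://ldap.example.com:389")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Plain search request, no server-side sort control attached.
	req := ldap.NewSearchRequest(
		"ou=people,dc=example,dc=com",
		ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 0, 0, false,
		"(objectClass=inetOrgPerson)",
		[]string{"cn", "mail"},
		nil,
	)
	res, err := conn.Search(req)
	if err != nil {
		log.Fatal(err)
	}

	// The client-side sort: a few dozen entries cost essentially nothing here.
	sort.Slice(res.Entries, func(i, j int) bool {
		return res.Entries[i].GetAttributeValue("cn") < res.Entries[j].GetAttributeValue("cn")
	})
	for _, e := range res.Entries {
		fmt.Println(e.GetAttributeValue("cn"), e.GetAttributeValue("mail"))
	}
}
```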
Server-side sorting is intended for use by clients that are unable or unwilling to sort results themselves; this might be useful in hand-held clients with limited memory and CPU mojo.
The advantages of server-side sorting include, but are not limited to:
the server can enforce a time limit on the processing of the sorting
clients can specify an ordering rule for the server to use
professional-quality servers can be configured to reject requests with sort controls attached if the client connection is not secure
the server can enforce resource limits, for example, the aforementioned time limit, or administration limits
the server can enforce access restrictions on the attributes and on the sort request control itself; this may not be that effective if the client can retrieve the attributes anyway
the server may indicate it is too busy to perform the sort or simply unwilling to perform the sort
professional-quality servers can be configured to reject search requests for all clients except for clients with the necessary mojo (privilege, bind DN, IP address, or whatever)
The disadvantages include, but are not limited to:
servers can be overwhelmed by sorting large result sets from multiple clients if the server software is unable to cap the number of sorts to process simultaneously
client-side APIs have to support the server-side sort request control and response
it might be easier to configure clients to sort by their own 'ordering rules'; although these can be added to professional-quality, extensible servers
To answer my own question, and not to detract from Terry's answer, use of the Virtual List View requires a Server Side Sort control.

Dynamic server discovery list

I'd like to create a web service that an application server can contact to add itself to a list of servers implementing the application. Clients could then contact the service to get a list of servers. Something similar to how Minecraft's heartbeats work for adding your server to the main server list.
I could implement it myself pretty easily, but I'm hoping someone has already created something like this.
Advanced features would be useful. Things like:
Allowing a client to perform queries on application-specific properties like the number of users currently connected to the server
Distributing the server list across more than one machine
Timing out a server's entry in the list if it hasn't sent a heartbeat within some amount of time
Does anyone know of a service like this? I know there are open protocols and servers for doing local-LAN service discovery, but this would be a WAN service.
The protocols I could find that had any relevance to your intended application are these:
XRDS (eXtensible Resource Descriptor Sequence).
XMPP Service Discovery protocol.
The XRDS documentation is obtuse, but you may be able to push service descriptions in XML format. The service type specification might be generic, but I get a headache from trying to decipher committee-speak.
The XMPP Service Discovery protocol (part of the protocol Formerly Known As Jabber) also looked promising, but it seems that even though you could push your service description, they expect it to be one of the services mentioned on this list. Extending it would make it nonstandard.
Finally, I found something called seap (SErvice Announcement Protocol). It's old, it's rickety, the source may be proprietary, it's written in C and Perl, it's a kludge, but it seems to do what you want, kind of.
It seems like pushing a service announcement pulse is such an application-specific and trivial problem, that almost nobody has considered solving the general case.
My advice? Read the protocols and sources mentioned above for inspiration (I'd start with seap), and then write, implement, and publish a generic (probably xml-based) protocol yourself. All the existing ones seem to be either application-specific, incomprehensible, or a kludge.
Basically, you can write it yourself, though I'm not aware of one that is publicly available (I wrote one over 10 years ago, but for a company). You need four pieces (a minimal sketch follows below):
a database table (columns: auto-increment id, svr_name, svr_ip, check_in_time, any other data)
code to receive a heartbeat (e.g. http://<your-app.com>?svr_name=XYZ&svr_ip=P.Q.R.S)
code to list the servers whose check_in_time falls within a certain window
code to do some housekeeping once in a while (e.g. purge old records)
To send a heartbeat out, you only need to make an HTTP call: on Linux use wget with crontab, on Windows use wget.exe with Task Scheduler.
It is application-specific, so even if you wrote one yourself, others couldn't use it without modifying the source code.
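As a sketch of those four pieces, here is a minimal in-memory version in Go; the endpoint names, the 60-second timeout, and the use of a map instead of a real database table are all simplifying assumptions:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"sync"
	"time"
)

type entry struct {
	Name    string    `json:"svr_name"`
	IP      string    `json:"svr_ip"`
	CheckIn time.Time `json:"check_in_time"`
}

var (
	mu      sync.Mutex
	servers = map[string]entry{} // in-memory stand-in for the database table
)

const timeout = 60 * time.Second // entries older than this are considered dead

// Receives heartbeats: GET /heartbeat?svr_name=XYZ&svr_ip=P.Q.R.S
func heartbeat(w http.ResponseWriter, r *http.Request) {
	name, ip := r.URL.Query().Get("svr_name"), r.URL.Query().Get("svr_ip")
	if name == "" || ip == "" {
		http.Error(w, "svr_name and svr_ip required", http.StatusBadRequest)
		return
	}
	mu.Lock()
	servers[name] = entry{Name: name, IP: ip, CheckIn: time.Now()}
	mu.Unlock()
}

// Lists servers that have checked in recently: GET /servers
func list(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	defer mu.Unlock()
	live := []entry{}
	for _, e := range servers {
		if time.Since(e.CheckIn) < timeout {
			live = append(live, e)
		}
	}
	json.NewEncoder(w).Encode(live)
}

// Housekeeping: purge stale records periodically.
func purge() {
	for range time.Tick(timeout) {
		mu.Lock()
		for name, e := range servers {
			if time.Since(e.CheckIn) > timeout {
				delete(servers, name)
			}
		}
		mu.Unlock()
	}
}

func main() {
	go purge()
	http.HandleFunc("/heartbeat", heartbeat)
	http.HandleFunc("/servers", list)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

A server then registers itself with a crontab line such as `* * * * * wget -q -O /dev/null "http://registry.example.com:8080/heartbeat?svr_name=XYZ&svr_ip=10.0.0.5"` (hypothetical host and values).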

using BOSH/similar technique for existing application/system

We have an existing system whose client connects to the back end via HTTP (Apache/SSL) and polls the server for new messages; needless to say, we have scalability issues.
I'm researching how to remove this polling and have come across BOSH/XMPP, but I'm not sure how we should adopt the BOSH technique (using long-lived HTTP connections).
I've seen that a few libraries are available, but the entire thing seems bloated since we do not need buddy lists etc. and simply want to notify the clients of available messages.
The client is written in C/C++ and works across most OSes, so that is an important factor. The server is in Java.
Does BOSH result in a huge number of httpd processes? Since it has to keep all the clients connected, what would the limit on that be? We are also planning to move to a 64-bit JVM/Apache; what would the maximum number of clients be in that case?
Any hints?
I would note that BOSH is separate from XMPP, so there are no "buddy lists" involved; XMPP-over-BOSH is what you're thinking of there.
Take a look at collecta.com and associated blog posts (probably by Jack Moffitt) about how they use BOSH (and also XMPP) to deliver real-time information to large numbers of users.
As for the scaling issues with Apache, I don't know; presumably each connection uses few resources, so you can increase the number of connections per Apache process. But you could also check out some of the connection manager technologies (like punjab) mentioned on the BOSH page above.
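Whatever you end up using, the core trick BOSH relies on is the same: the server holds the HTTP request open until a message arrives or a timeout fires, instead of the client polling. A minimal long-poll sketch in Go (this is not BOSH itself; the 25-second timeout, the endpoint names, and the single shared channel are simplifying assumptions):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// In a real system there would be one queue per authenticated client;
// a single shared channel keeps the sketch short.
var messages = make(chan string, 16)

// The client re-issues this request as soon as it returns (long polling).
func poll(w http.ResponseWriter, r *http.Request) {
	select {
	case msg := <-messages:
		fmt.Fprintln(w, msg) // deliver immediately when something arrives
	case <-time.After(25 * time.Second):
		w.WriteHeader(http.StatusNoContent) // nothing yet; client reconnects
	case <-r.Context().Done():
		// client went away; just drop the request
	}
}

// Anything that produces messages pushes them into the channel.
func publish(w http.ResponseWriter, r *http.Request) {
	messages <- r.URL.Query().Get("msg")
}

func main() {
	http.HandleFunc("/poll", poll)
	http.HandleFunc("/publish", publish)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Each held connection here costs only a blocked goroutine, which is the property you want from whichever connection manager you choose; a prefork Apache, by contrast, ties up a whole process per held request, which is where the httpd-process concern comes from.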