I am writing multiple AGIs in Perl that will be called from the Asterisk dialplan. I expect to receive numerous simultaneous calls, so I need a way to load balance them. I have been advised to use FastAGI instead of AGI. The problem is that my AGIs will be distributed over many servers, not just one, and I need my entry-point Asterisk to dispatch the calls among those servers (where the AGIs reside) based on their availability. So I thought of providing the FastAGI application with multiple IP addresses instead of one. Is it possible?
Any TCP reverse proxy would do the trick. HAProxy being one and nginx with the TCP module being another one.
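For example, a minimal HAProxy sketch (the server names and addresses are made up; FastAGI conventionally uses TCP port 4573):
# minimal sketch: round-robin TCP proxying for FastAGI
listen fastagi
    bind *:4573
    mode tcp
    balance roundrobin
    server agi1 192.0.2.10:4573 check
    server agi2 192.0.2.11:4573 check
Point the dialplan's AGI() calls at the proxy's address and HAProxy will spread the TCP connections across the servers, skipping any that fail their health check.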
A while back, I crafted my own FastAGI proxy using node.js (nodast) to address this very specific problem and a bit more, including the ability to run the FastAGI protocol over SSL and to route requests based on the AGI request location and parameters (such as $dnis, $channel, $language, ...).
Moreover, as the proxy configuration is basically JavaScript, you could actually load balance in really interesting ways.
A sample config would look as follows:
var config = {
    listen : 9090,
    upstreams : {
        test : 'localhost:4573',
        foobar : 'foobar.com:4573'
    },
    routes : {
        'agi://(.*):([0-9]*)/(.*)' : function() {
            if (this.$callerid === 'unknown') {
                return ('agi://foobar/script/' + this.$3);
            } else {
                return ('agi://foobar/script/' + this.$3 + '?callerid=' + this.$callerid);
            }
        },
        '.*' : function() {
            return ('agi://test/');
        },
        'agi://192.168.129.170:9090/' : 'agi://test/'
    }
};

exports.config = config;
I have a large IVR implementation using FastAGI (24 E1's all doing FastAGI calls, peaking at about 80%, so that's nearly 600 Asterisk channels calling FastAGI). I didn't find an easy way to do load balancing, but in my case there are different FastAGI calls: one at the beginning of the call to validate the user in a database, then a different one to check the user's balance or their most recent transactions, and another one to perform a transaction.
So what I did was send all the validation and simple queries to one application on one server and all the transaction calls to a different application on a different server.
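In the dialplan that just amounts to pointing each stage at its own host, roughly like this (all hostnames and script names here are made up):
; hypothetical hosts: one for validation/queries, one for transactions
exten => s,1,AGI(agi://validation.example.com/validate_user)
 same => n,AGI(agi://validation.example.com/check_balance)
 same => n,AGI(agi://transactions.example.com/do_transaction)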
A crude way to do load balancing, if you have a lot of incoming calls on zaptel/dahdi channels, would be to use different groups for the channels. For example, suppose you have 2 FastAGI servers and 4 E1's receiving calls. You can set up 2 E1's in group g1 and the other 2 E1's in group g2. Then you declare global variables like this:
[globals]
serverg1=ip_of_server1
serverg2=ip_of_server2
Then in your dialplan you call FastAGI like this:
AGI(agi://${server${CHANNEL(callgroup)}}/some_action)
On channels belonging to group g1, ${CHANNEL(callgroup)} resolves to g1, so the expression becomes ${serverg1}, which expands to ip_of_server1; on channels belonging to group g2, ${CHANNEL(callgroup)} resolves to g2, so you get ${serverg2}, which expands to ip_of_server2.
It's not the best solution, because calls usually start coming in on one span and then spill over to the next, so one server will get more work, but it's something.
To get real load balancing I guess we would have to write a FastAGI load balancing gateway, not a bad idea at all...
Mehhh... use the same constructs that would apply to load balancing something like web page requests.
One way is round-robin DNS. So if you have vru1.example.com at 10.0.1.100 and vru2.example.com at 10.0.1.101, you put two entries in DNS like...
fastagi.example.com 10.0.1.100
fastagi.example.com 10.0.1.101
... then from the dialplan, agi(agi://fastagi.example.com/youragi) should in theory alternate between 10.0.1.100 and 10.0.1.101. And you can add as many hosts as you need.
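For reference, in a BIND-style zone file those two entries would look something like this:
; two A records for the same name give DNS round-robin
fastagi    IN  A    10.0.1.100
fastagi    IN  A    10.0.1.101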
The other way to go is a bit too involved to explain fully here, but proxy tools like HAProxy should be able to route between multiple servers, with the added benefit of being able to "take one out" of the mix for maintenance, or to do more advanced balancing such as distributing requests based on current load.
We need to use a single instance of VRS to support concurrent requests.
We have a requirement where multiple different users should be able to create route plans for different vehicles and locations at the same time. However, looking at the VRS functionality, I am not able to understand how the application supports this. For a demo, when I create a different route using a different browser, it always merges the first and second requests and gives one single result.
Just a little more elaboration on the question:
We are aiming to expose requests as REST API endpoints which will be invoked by different users at the same time for their use cases.
E.g. Request 1: vehicles 1 & 2 with 50 locations. VRS calculates a route and gives one message with all detailed calculations for request 1.
Request 2: vehicles 3 & 4 with 40 locations. VRS calculates a route which we later get as one message with all detailed calculations limited to request 2.
Both requests can be submitted at the same time, and the application should treat them as separate requests without merging them.
Is there a way to add a request ID or any other parameter to achieve this?
For multi-tenant solving, the SolverManager API is ideal:
public class TimeTableService {

    // tenantId is Long, but it can also be String or UUID
    private SolverManager<TimeTable, Long> solverManager;

    // Returns immediately, call it for every dataset
    public void solveBatch(Long tenantId) {
        solverManager.solve(tenantId,
                // Called once, when solving starts
                this::findById,
                // Called once, when solving ends
                this::save);
    }

    public TimeTable findById(Long tenantId) {...}

    public void save(TimeTable timeTable) {...}
}
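Each incoming request then calls solveBatch with its own problem ID, and the solver jobs run as separate datasets that never merge. A hypothetical usage sketch (the service instance and IDs are invented for illustration):
// Two concurrent requests, each solved independently under its own ID
timeTableService.solveBatch(1L); // e.g. request 1: vehicles 1 & 2, 50 locations
timeTableService.solveBatch(2L); // e.g. request 2: vehicles 3 & 4, 40 locations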
I have an assignment and I need a little help. I have infected.pcap and the following task:
Hardcoded IP addresses: Sometimes, malware contains hardcoded IP addresses to download its payload or to communicate with its command and control (C&C) server. Find all such communication. Hint: such IPs have no preceding DNS request.
I need to solve it with a Bro script. This was my idea, but unfortunately it prints "No DNS" for all my connections:
#load base/protocols/dns/main.bro

event file_timeout(f: fa_file)
    {
    for ( cid in f$conns )
        {
        if ( f$conns[cid]?$dns )
            {
            print f$conns[cid]$dns;
            print "DNS";
            }
        else
            {
            print "No DNS";
            }
        }
    }
Do you know what might be wrong with my code?
I would suggest that you're using the wrong event for this. file_timeout only fires when a file transfer was occurring and then stopped without completing. A much more interesting event correlation would be:
1. Track DNS address lookup responses (I would likely use event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr)).
2. Record the addresses returned in a set; this will provide you a set of all addresses that were discovered through a DNS query.
3. Examine outbound requests (where orig_h on the SYN is an internal address).
4. Check whether the address in id$resp_h is in the set of addresses from step 2. If it is, return; if it isn't, generate a notice, since you have an outbound connection attempt with no corresponding DNS lookup.
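A rough Bro sketch of that correlation (untested; 192.168.0.0/16 stands in for your internal network, and the notice is reduced to a print for brevity):
global dns_learned_addrs: set[addr] = set();

# Steps 1 and 2: remember every address returned in a DNS A reply.
event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr)
    {
    add dns_learned_addrs[a];
    }

# Steps 3 and 4: flag outbound connections to addresses never seen in a DNS answer.
event connection_established(c: connection)
    {
    if ( c$id$orig_h in 192.168.0.0/16 && c$id$resp_h !in dns_learned_addrs )
        print fmt("possible hardcoded IP: %s -> %s", c$id$orig_h, c$id$resp_h);
    }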
My Apache server goes down when a random client starts a lot of GETs for the same URL. The problem is that it happens with unpredictable URL paths. With fail2ban I can ban a predetermined URL, but not prevent it for unknown URL paths. Is there a way to resolve this?
Depending on your web server, you should be able to scan your logs for GET requests and ban people who make too many of them within a specific time period. You just need to be careful to avoid banning legitimate users, so the frequency of allowable GET requests is something to fine-tune carefully.
Create a new jail filter (the filename must match the filter = getflood line used below): sudo nano /etc/fail2ban/filter.d/getflood.conf
Define the regex you need for identifying GET requests based on your web server's logs. With a standard Apache access.log, it would be: failregex = ^<HOST>.*\s"GET\s.*$
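So the complete filter file would contain something like:
[Definition]
failregex = ^<HOST>.*\s"GET\s.*$
ignoreregex =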
Add an entry to your /etc/fail2ban/jail.local:
[getflood]
enabled = true
action = iptables-allports[name=getflood]
filter = getflood
logpath = /var/log/apache2/*access.log
maxretry = 30
findtime = 90
bantime = 604800
Here, we let any individual IP address make up to 30 GET requests every 90 seconds. Again, without more details about your server, you'll need to play around with these timings to avoid banning legitimate users.
When I do something like apps/openssl s_client -connect 10.102.113.3:443 -ssl3, client-server communication is established using OpenSSL.
Now, I want to send application data from the client to the server. For example, after doing apps/openssl s_client -connect 10.30.24.45:443 -ssl3, I get something like this:
...certificate and session details...
---
GET /path/to/file
The GET /path/to/file all goes in one SSL record. I want to send it in multiple records.
I assume I have to edit apps/s_client.c and find the place where the SSL_write (or similar) happens.
How do I go about something like that?
For a properly designed application, the TCP packet sizes and SSL frame sizes should not matter. But there are badly designed applications out there which expect to get, say, the whole HTTP request inside a single read, which often means that it must be inside a single SSL frame. If you want to run tests against applications to check for this kind of behavior, you either have to patch your s_client application or you might use something else, like:
#!/usr/bin/perl
use strict;
use IO::Socket::SSL;

my $sock = IO::Socket::SSL->new('www.example.com:443') or die "$!,$SSL_ERROR";
# each print triggers its own SSL_write, i.e. its own SSL frame
print $sock "GE";
print $sock "T / HT";
print $sock "TP/1.0\r\n\r\n";
This will send the HTTP request header within 3 SSL frames (which might still get put together into the same TCP packet). Since on lots of SSL stacks (like OpenSSL) one SSL_read reads only a single SSL frame this will result in 3 reads necessary to read the full HTTP request.
Okay, I figured out that I needed to change the number of bytes I'm writing with SSL_write.
This is a code snippet, starting at line 1662 of s_client.c:
if (!ssl_pending && FD_ISSET(SSL_get_fd(con),&writefds))
{
k=SSL_write(con,&(cbuf[cbuf_off]), (unsigned int)cbuf_len);
.......
}
To make the application data be sent in multiple records instead of just one, change the last parameter of the SSL_write call.
For example, do this:
if (!ssl_pending && FD_ISSET(SSL_get_fd(con),&writefds))
{
k=SSL_write(con,&(cbuf[cbuf_off]), 1);
.......
}
In a packet capture you will then see multiple records for Application Data instead of just the one.
How do I get MongoDB's time or use it in a query from VB.NET?
For example, in the Mongo shell I would do:
db.Cookies.find({ expireOn: { $lt: new Date() } });
In PHP I can easily do something like this:
$model->expireOn = new MongoDate();
How do I approach this in VB.NET? I don't want to use the local machine's time. This obviously doesn't work:
MongoDB.Driver.Builders.Query.LT("expireOn", "new Date()")
If you merely want to remove expired cookies from your collection, you could use the TTL collection feature, which automatically removes expired entries via a background worker on the server, hence using the server's time:
db.Cookies.ensureIndex( { "expireOn": 1 }, { expireAfterSeconds: 0 } )
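From VB.NET with the legacy driver (the same MongoDB.Driver.Builders vintage used in the question), creating that TTL index would look roughly like this (the database handle db and the collection name are assumed):
' Rough sketch: the TTL index makes the server expire documents using
' its own clock, so the client's time never enters into it.
Dim cookies = db.GetCollection(Of BsonDocument)("Cookies")
cookies.EnsureIndex(IndexKeys.Ascending("expireOn"),
                    IndexOptions.SetTimeToLive(TimeSpan.Zero))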
If you really need to query, use a service program that runs on the server, or ensure your clocks are reasonably synchronized; clocks that are considerably off can cause a plethora of problems, especially for web servers and email servers (consider HTTP headers like Date, Last-Modified and If-Modified-Since, email timestamps, HMAC/timestamp validation against replay attacks, etc.).