We are setting up a new server with configs and software versions that differ from our previous installations. This server will be connected to our HAProxy server, but I want to limit/reduce the flow of traffic to the new server. Is it possible to add a weight value to a server in HAProxy to accomplish this, or is there some other config setting to achieve the same goal?
Yes, HAProxy supports per-server weighting:
server smtp01 smtp01.example.com weight 45 check inter 15000
server smtp02 smtp02.example.com weight 45 check inter 15000
server smtp03 smtp03.example.com weight 10 check inter 15000
The example is for a TCP proxy. With roundrobin balancing, smtp03 will receive 10 out of every 100 SMTP connections.
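Putting those server lines in context, a minimal backend stanza might look like the sketch below (the backend name and port 25 are assumptions; adapt to your service). Weights can also be changed at runtime through the stats socket without a reload.

```
# haproxy.cfg sketch (hedged): backend name and port are assumptions
backend smtp_cluster
    mode tcp
    balance roundrobin
    # smtp03 is the new server: weight 10 vs 45/45 keeps its share low
    server smtp01 smtp01.example.com:25 weight 45 check inter 15000
    server smtp02 smtp02.example.com:25 weight 45 check inter 15000
    server smtp03 smtp03.example.com:25 weight 10 check inter 15000
```

Once the new server proves itself, you can raise its weight live with `set server smtp_cluster/smtp03 weight 45` on the stats socket.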
Is there a way to specify the WebRTC ports that Ant Media Server uses?
Yes, there is a way to do that:
1. Stop the server: sudo service antmedia stop
2. Open your application's properties file: webapps/WebRTCAppEE/WEB-INF/red5-web.properties
3. Add the minimum port value: settings.webrtc.portRangeMin=50000
4. Add the maximum port value: settings.webrtc.portRangeMax=51000
5. Save the file and start the server: sudo service antmedia start
In this example, Ant Media Server uses ports between 50000 and 51000 for WebRTC connections. Note that this also limits the number of concurrent publishers and players: it uses roughly 2 ports per WebRTC connection, so 1000 ports limit you to about 500 connections.
Maybe the title is not as descriptive as the question, but hopefully someone can help me with this.
I am setting up a new SOA for hosting our website (an e-commerce platform). I have 3 servers available for this (bare-metal machines at the moment) and wish to balance the load over the 3 servers. Each server contains a full stack of the application, and the APIs are connected across servers via a local network (10.x.x.x).
Each server has a couple of public IP-addresses to accept requests on.
What I would like to see is that server 1 accepts all requests and balances them through to the other servers depending on the load. This part I've already set up using HAProxy instances. The part I'm missing is what should happen when server 1 fails and every connection should be redirected to server 2 or 3.
I was looking at VRRP (keepalived), but I've only seen setups with 2 servers, not 3 or more.
Does anyone have a suggestion for my kind of setup?
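For what it's worth, keepalived is not limited to two nodes: the same vrrp_instance can run on any number of servers, differing only in priority, and the highest-priority live node holds the VIP. A hedged keepalived.conf sketch (interface name, router ID, password, and VIP are all placeholders):

```
# keepalived.conf sketch for three nodes (hedged): identical on all three
# machines except for priority; the highest live priority holds the VIP
vrrp_instance VI_WEB {
    state BACKUP          # let priority decide ownership; avoids flap on restart
    interface eth0
    virtual_router_id 51
    priority 150          # server 1: 150, server 2: 100, server 3: 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass example
    }
    virtual_ipaddress {
        203.0.113.10/24   # the public VIP your HAProxy binds to
    }
}
```

If server 1 dies, server 2 (priority 100) takes the VIP; if it dies too, server 3 takes over.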
I have several Asterisk boxes and 2 Kamailio servers (in a failover pair) load balancing calls between the Asterisk boxes. The Kamailio servers receive calls from E1-to-SIP gateways and then forward them to the Asterisk cluster. There is no NAT, and the platform only processes inbound calls.
At this point, load balancing for the Asterisk servers is fine: the Asterisk cluster handles several thousand simultaneous calls without any problem, and if I want to handle more calls, I "just" need to set up a new Asterisk server and add its IP address to Kamailio's dispatcher list.
Regarding Kamailio, the failover cluster (if we can call it a cluster, as there are only 2 servers) works perfectly.
But as any high-tech solution, there are limits: we cannot grow the Asterisk cluster indefinitely, so at some point, we'll need to add more Kamailio servers.
Knowing that the E1-to-SIP gateways redirect calls to only one IP address (the Kamailio cluster address), the question is:
How can we add any number of new Kamailio servers to the platform and load-balance SIP requests across the Kamailio cluster?
Roughly speaking: how do you load balance the load balancers? :)
I thought about Kamailio + LVS integration. Any clues, anyone?
You have the following choices:
1) A "root" Kamailio with a 301/302 redirect setup, which simply redirects inbound calls to a set of Kamailio servers.
2) DNS that returns a different IP each time (round-robin DNS); this only helps if the clients re-resolve DNS.
3) http://www.lartc.org/autoloadbalance.html
4) A Cisco router or an iptables setup similar to the LARTC one (just forward the port to different IPs in random order).
But please note: if your load is so large that a single Kamailio server can't handle it, you are either doing something wrong or you need to hire an expert at this stage.
A single Kamailio server can easily serve up to 7,000 calls per second.
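Option 1 can be sketched in kamailio.cfg along these lines. This is a hedged sketch, not a complete config: it assumes the sl and dispatcher modules are loaded and that dispatcher set 1 lists your worker Kamailio servers in dispatcher.list.

```
# kamailio.cfg fragment (sketch): "root" Kamailio that redirects, not proxies
request_route {
    if (is_method("INVITE")) {
        # pick a destination from dispatcher set 1 (alg 4 = round-robin)
        if (ds_select_dst("1", "4")) {
            # point the caller at the chosen Kamailio ($dd/$dp = destination host/port)
            append_to_reply("Contact: <sip:$dd:$dp>\r\n");
            sl_send_reply("302", "Moved Temporarily");
        }
        exit;
    }
}
```

Because the root node only answers with a redirect and never relays media or dialogs, it can front far more traffic than a proxying Kamailio.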
I want to optimize the configuration of php5-fpm with Apache. On my server I have one site with 4 subdomains, all equally important, and I have configured a pool for each subdomain. The server has 8 GB of RAM and 6 vcores. The website will generate a lot of traffic and I would like to tune it as well as possible. The server also runs a MariaDB database, logwatch, fail2ban, rkhunter, postfix, and webmin, but nothing more; it is dedicated to this single site.
At the moment in every pool I put:
pm = dynamic
pm.max_children = 150
pm.start_servers = 30
pm.min_spare_servers = 15
pm.max_spare_servers = 40
Can you tell me whether I need to add or adjust any parameters?
In addition to this, I have added an APC cache. If you need more information, ask me.
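As a rough sizing sketch: with four pools of pm.max_children = 150 each, up to 600 workers could spawn, which would exhaust 8 GB long before then. A common approach is to divide the RAM left for PHP by the average worker size. The numbers below are assumptions (≈40 MB per worker, ≈5 GB left after MariaDB and the OS); measure your own with ps before adopting them.

```
; hedged sizing sketch for ONE of the four pools
; total workers ~= RAM available for PHP / average worker size
;               ~= 5000 MB / 40 MB ~= 125, so ~30 per pool
pm = dynamic
pm.max_children = 30
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12
; recycle workers periodically to contain memory leaks
pm.max_requests = 500
```

The key point is that the sum of pm.max_children across all pools, times the real per-worker memory, must fit in RAM alongside MariaDB.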
Thanks
I'm writing some code on a mobile device that uses a REST service to retrieve data from a host. That REST service is being proxied by Apache. In test mode I would like to be able to simulate network outages (as if the device has lost its cell connection) to test the application's handling of intermittent failures. I also need to validate its behavior with slow network connections.
I'm currently using Traffic Shaper XP to slow the network connection, but now I need something to make the Apache server send connection resets both randomly and on predefined sequences (to setup and repeat specific test scenarios).
I highly recommend https://github.com/Shopify/toxiproxy from Shopify:
Download the CLI and the server from https://github.com/Shopify/toxiproxy/releases
Run the server:
./toxiproxy-server-linux-amd64
With the CLI, set up a proxy to Apache on another port, e.g. 8080:
./toxiproxy-cli create apache -l localhost:8080 -u localhost:80
Make connection slow and unreliable:
./toxiproxy-cli toxic add apache -t latency -a latency=3000
./toxiproxy-cli toxic add apache -t limit_data -a bytes=1000 --tox=0.01
This adds 3 seconds of latency and, for 1% of connections, cuts the stream after 1000 bytes. There are other toxics for bandwidth etc., and you can add or remove them during use. Lots of other features and client libraries are available there.
In Apache 2 you can make it slow by adjusting the prefork settings in apache2.conf. The settings below ought to make Apache pretty slow; they made my local web application take 700% longer to load.
<IfModule mpm_prefork_module>
StartServers 2
MinSpareServers 2
MaxSpareServers 2
MaxClients 4
MaxRequestsPerChild 0
</IfModule>
It looks like DummyNet is the closest thing, but it’s still not quite there. For repeatable testing it would be good to have some control over dropped packets and resets.
Write a little proxy that forwards TCP connections from your app to the Apache server, and that your test harness can instruct to cut the connection after x bytes or milliseconds.
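A minimal sketch of such a proxy in Python (the port numbers and the byte-cap policy are assumptions; extend `relay` with a timer if you also want time-based cuts):

```python
import socket
import threading

def relay(src, dst, max_bytes=None):
    """Copy bytes src -> dst; if max_bytes is set, cut the connection after
    that many bytes to simulate a mid-response network failure."""
    sent = 0
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        if max_bytes is not None and sent + len(chunk) > max_bytes:
            dst.sendall(chunk[:max_bytes - sent])
            break
        dst.sendall(chunk)
        sent += len(chunk)
    # tear down both directions so the client sees a hard disconnect
    for s in (src, dst):
        try:
            s.shutdown(socket.SHUT_RDWR)
        except OSError:
            pass

def run_proxy(listen_port, upstream_host, upstream_port, max_bytes=None):
    """Accept clients on listen_port and relay them to the upstream Apache,
    truncating each response after max_bytes when set."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((upstream_host, upstream_port))
        # request path: uncapped; response path: capped to inject the fault
        threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
        threading.Thread(target=relay, args=(upstream, client, max_bytes), daemon=True).start()
```

Point the mobile app at the proxy port (e.g. `run_proxy(8080, "localhost", 80, max_bytes=1000)`) and each response will be cut after 1000 bytes, which is enough to drive a repeatable "connection dropped mid-transfer" scenario.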
On a different (or the same) computer, use the command-line tool ab (ApacheBench) to put some load on Apache, e.g. ab -n 1000 -c 50 http://host/.
Is this a Unix or Linux environment? nice Apache down to a lower priority, then run a high-CPU task (listening to music, playing a movie, calculating pi, etc.). The low priority for Apache should create problems similar to what you're looking for.