I have a controller and a topology created using Mininet. I need to generate traffic among the hosts of the topology via iperf, so that the controller becomes overloaded and can no longer keep up. Is there an iperf command to generate a huge number of packets at a time, or a very large volume of traffic?
What does "the controller is loaded and it can not handle" mean? Do you want to generate enough traffic to saturate the CPU? The network bandwidth? Some other resource?
I also found this tool useful for generating multi-threaded traffic on Linux: https://github.com/Microsoft/ntttcp-for-linux
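For what it's worth, iperf itself can generate fairly heavy load if you run several parallel UDP streams at a high target bandwidth. A sketch from the Mininet CLI (h1/h2 and 10.0.0.1 assume a default Mininet topology; adjust to yours):

    mininet> h1 iperf -s -u &
    mininet> h2 iperf -c 10.0.0.1 -u -b 500M -P 8 -t 60

Note that an OpenFlow controller is typically stressed by packet-in events from new flows rather than by raw bytes, so many short flows between varied host pairs may load it more than a few fat streams.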
Related
I have a simple Tomcat API, and my goal is to handle the highest possible number of requests per second (req/sec).
My problem is the following:
Scenario 1: When the client is using some persistent connections I manage to reach around 20000 req/sec using a single instance of the API. The server is loaded and the CPU of the server is almost fully used.
Scenario 2: When the client closes the connection after each request, the API only manages 600 req/sec and the server resources are hardly used at all. So I guess there is a bottleneck either on the total number of connections or on the number of connections the server is able to accept per second.
What I want to know is whether there is a configuration (in Tomcat or on the server) that I can change to improve performance in scenario 2.
If not, what kind of resource is the limiting factor? Could I address the problem by deploying many single-CPU servers?
What I have looked at so far:
The number of threads and connections in the Tomcat config:
I have adjusted these numbers from the defaults to 200 threads and 2000 connections; I see no effect in scenario 2.
ulimit is set to unlimited.
The JVM is configured as follows: JAVA_OPTS: -Xmx8g
It would be better if you provided more information about your deployment, but generally there are a few steps that can help you achieve better performance.
First of all, you should measure the cost of each request and optimize it as much as you can. For example, if your API executes a query on the local database with each request and that query consumes a lot of CPU, you should optimize the query. By doing this, your server can tolerate more requests before its CPU reaches 100%.
Note that tools like JProbe can help you optimize your API.
Secondly, monitor your resources during the test and find out which of them becomes fully used. Check network connections, disk, memory, and CPU load during the test, and identify the weakest resource. Track blocked threads and deadlocks, as they are important to performance.
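To make the thread-level part concrete, something like the following during the load test can show where threads pile up (a sketch; it assumes a single Tomcat JVM that pgrep can find):

    # per-thread CPU usage of the Tomcat JVM
    top -H -p "$(pgrep -f tomcat)"
    # dump the thread stacks and look for blocked threads
    jstack "$(pgrep -f tomcat)" | grep -B1 -A2 BLOCKED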
Based on this information, you can scale up your server resources, implement a distributed architecture, add a load balancer, or add a caching strategy to your project.
In your Tomcat configuration there are some settings that can improve performance, such as the following (a sketch follows the list):
Configuring connectors:
- set maxThreads to a high enough value
- set acceptCount to a high enough value
Configuring the cache:
- set the cacheMaxSize attribute to an appropriate value
Configuring content compression:
- turn content compression on and use GZIP compression
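To show where those attributes live, here is a sketch (the values are illustrative, not recommendations; tune them against your own measurements):

    <!-- conf/server.xml: connector tuning (illustrative values) -->
    <Connector port="8080" protocol="HTTP/1.1"
               maxThreads="400"
               acceptCount="1000"
               compression="on"
               compressionMinSize="1024" />

    <!-- static resource caching, inside the <Context> element (Tomcat 8+) -->
    <Resources cachingAllowed="true" cacheMaxSize="102400" />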
Has anyone else seen performance issues with running Redis in a Docker container environment?
Here's what I've noticed...
Setup A: Local machine, traditional Redis install
Setup B: Local machine, using the canonical Redis image https://registry.hub.docker.com/_/redis/
I've got an identical HTTP server on my local machine that fires requests as fast as the request/response cycle will allow.
Observations:
- A can sustain approximately 2X the throughput of B.
- B performs identically to A when you benchmark from within the container.
So this leads me to believe that B is slower than A because of a networking issue: i.e., the network relays introduced by running software in a virtualized environment are creating significant performance overhead...
Just wondering if anyone else has noticed anything like this?
Docker's default networking option, --net=bridge, introduces overhead due to NAT packet rewriting, which is noticeable at high packet rates.
Network performance can be improved with --net=host, which instructs Docker not to create a separate network stack for the container, giving it full access to the host's network interfaces.
This option should be used carefully though, as it lets container processes open low-numbered ports like any other root process and access local network services like D-Bus, which can lead to processes in the container doing unexpected things.
In short: if you know what you are running inside the container, it is safe. If you suspect unwanted or aggressive behavior, do not do it.
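For comparison, the two modes look like this (the redis image and port are from the question's setup; everything else is stock Docker):

    # default bridge networking: the published port goes through NAT
    docker run -d -p 6379:6379 redis

    # host networking: no separate network namespace, no NAT
    docker run -d --net=host redis

Running redis-benchmark against both from another machine should reproduce, or rule out, the gap.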
Can someone explain to me how high availability ("HA") works for a web application? I assume HA means that there is no single point of failure.
However, even if a load balancer is used, isn't the load balancer itself a single point of failure?
I have found this article on the subject:
http://www.tenereillo.com/GSLBPageOfShame.htm
Basically, if you do not require long-lasting sticky sessions, you can configure your DNS servers to return multiple A records (IP addresses) for your website.
Web browsers are smart enough to try all the addresses until they find one that works.
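You can see this pattern on existing multi-homed sites with dig (the name and addresses below are illustrative):

    $ dig +short www.example.com A
    203.0.113.10
    203.0.113.20

The browser picks one address and falls back to the others if the connection fails.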
In simple words, high availability can be defined as running a system 24/7 without downtime even when there are hardware and software failures; in other words, a fault-tolerant application. This helps ensure uninterrupted use of the application for its intended users.
Read more on High Availability Deployment Architecture
It works the following way: you set up two HAProxy servers with heartbeat, so when one fails (stops responding to queries), it is removed from the cluster.
Requests from HAProxy can be forwarded to the web servers in round-robin fashion, and if one web server fails, the HAProxy servers do not try to contact it until it is alive again.
The web servers store all dynamic information in a database, which is replicated across two MySQL instances.
As you can see, HAProxy and MySQL clustering (or simply MySQL replication), as well as IP clustering, are the key here.
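A sketch of the HAProxy side of such a setup (names and addresses are illustrative; the heartbeat failover between the two HAProxy boxes is configured separately):

    # haproxy.cfg (sketch)
    frontend http-in
        bind *:80
        default_backend webservers

    backend webservers
        balance roundrobin
        # 'check' enables health checks; dead servers are skipped
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check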
Sure it is, when operated alone. The usual highly available setup includes two or more load balancers running as a cluster in either active/active or active/passive configuration. To further increase availability you can have two different Internet Service Providers (or geo-distributed data centers), each running a pair of clustered load balancers. You then configure the DNS A record to resolve to two distinct public IP addresses, which guarantees round-robin processing that splits DNS requests evenly (CloudFlare is very fast and reliable at this). It is also possible to return the IP address of the data center closest to the client's geographic location by using something like PowerDNS dnsdist.
This is what big players do to make their services highly available.
Please read https://docs.oracle.com/cd/E23824_01/html/821-1453/gkkky.html for more clarity. In that setup, both load balancers use the same VIP (virtual IP address: https://techterms.com/definition/vip).
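One common way to share a single VIP between two Linux load balancers is VRRP, for example with keepalived. A sketch, where the interface name, router id, and VIP are all illustrative:

    # /etc/keepalived/keepalived.conf on the primary box
    vrrp_instance VI_1 {
        state MASTER              # the standby box uses: state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100              # the standby uses a lower priority
        virtual_ipaddress {
            203.0.113.100         # the shared VIP that clients connect to
        }
    }

If the primary stops sending VRRP advertisements, the standby takes over the VIP within seconds.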
HA architecture is an entire field, and multiple books have been written on it, so it is hard to answer in a short paragraph.
To sum up the ideal situation: you would use multiple servers, interconnected with a layer of multiple load balancers. The nodes and load balancers would be located in a few different data centers and connected to different network backbones. Ideally, the data centers would be located all over the world.
In short, every component would have redundancy, including the load balancers.
For a starting point, see Wikipedia for High Availability Cluster
I've written a simple server application which will run distributed on several machines.
My question is: how does a network load balancer work, in general?
I've heard of round-robin and other algorithms, but what I haven't found an answer to is how the process actually goes, in socket terms.
Does the client connect to one of the load balancer machines, ask for a "free-to-connect-to" server, and simply connect to it?
That's the simplest way I can think of.
...or does it use the load balancer as a proxy (which implies that all the load balancers must stay connected to the application servers, and all data is transferred through them)?
It's more of a general question. How would you do this?
Thank you all!
There are several different ways to load balance an application. Some are physical devices that sit between your router and the servers; some are software based, with a bit of code that runs on each of the load-balanced devices.
Microsoft has load balancing built into Windows which is all software based. It's pretty good and easy to set up.
However, I'll cover the physical route.
There are several algorithms here, but the main one is Round Robin with an option for "sticky" sessions. Sticky in this case means that the load balancer will try to keep a history of clients and forward requests from the same client to the same machine. This means the load balancer needs to keep a list of clients and where it directed those clients. Depending on cache size, clients may fall off the list and on future requests they may be forwarded to a different server.
Round Robin is a pretty simple idea: for each request that comes in, send it to the next server in the list. More complicated algorithms might take into account how many requests go to a particular server and how long those requests take, and then try to rebalance new requests to favor faster servers. That part is complicated, though.
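To put the question's "socket terms" into code, here is a minimal sketch of the proxy-style balancer doing plain round robin (Python 3.8+; the backend addresses and listening port are illustrative; no health checks, no stickiness, and one thread per direction purely for readability):

    import itertools
    import socket
    import threading

    BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # illustrative
    rotation = itertools.cycle(BACKENDS)  # round robin: next server each time

    def pipe(src, dst):
        # copy bytes one way until the connection closes
        try:
            while chunk := src.recv(4096):
                dst.sendall(chunk)
        except OSError:
            pass
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)  # signal EOF to the other side
            except OSError:
                pass

    def handle(client):
        # the balancer stays in the data path: client <-> balancer <-> backend
        backend = socket.create_connection(next(rotation))
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", 8000))
    listener.listen(128)
    while True:
        conn, _ = listener.accept()
        handle(conn)

LVS and hardware balancers do essentially the same forwarding at the packet level instead of the socket level, which is why they scale further.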
How does Apache fare with respect to the C10K problem under normal conditions?
Say, while running very small scripts with little data. Or do I need to scale out if I use Apache?
In the background, the heavy lifting is done by a few servers running specialized software that processes the requests, but I'd like to use Apache as a front end. Is this a viable plan?
I consider Apache to be more of an origin server - running something like mod_php or mod_perl to generate the content and being smart about routing to the appropriate system.
If you are getting thousands of concurrent hits to the front of your site, with a mix of types of data (static and dynamic) being returned, you may find it useful to put a more optimised system in front of it though.
The classic post-optimisation problem with Apache isn't generating the dynamic content (or at least, that can be optimised for early in the process), but simply waiting for a slow client to be able to receive the bytes being sent. It can therefore be a significant advantage to put a reverse proxy, in the form of Squid or Nginx, in front of the servers to take over the 'spoon-feeding' of slow network clients, while allowing content production to happen at full speed and at local network speeds (100 Mb/sec or even gigabit), if it even has to traverse a network at all.
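A sketch of that arrangement with Nginx in front (the backend address is illustrative; proxy buffering is what takes over the spoon-feeding):

    # nginx.conf (sketch): Nginx absorbs slow clients, Apache makes content
    events {}
    http {
        upstream apache_backend {
            server 10.0.0.5:8080;     # Apache, reachable at LAN speed
        }
        server {
            listen 80;
            location / {
                proxy_pass http://apache_backend;
                proxy_buffering on;   # buffer the response, free Apache fast
            }
        }
    }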
I'm assuming you've probably seen this data, but if not, it might give you some idea.
Imagine that you are running a web server with 10K simultaneous connections. How could that come about?
You've got many, many connections per second:
Dynamic content
Are you sure your CPU can handle that many PHP sessions, for example? I guess not, so why are you worrying about the C10K problem? :D
Static content - small files
And still so many connections? On a single server? Then you probably have networking/throughput problems too, or you are a future competitor of Google. Use lighttpd, which addresses the C10K problem and is stable: fly light. Using Apache just to serve static files on a large site is an obvious mismatch.
Your clients are downloading large files over a long time - static content
ISO images, archives, etc.
If you are serving these via a web server, FTP may be more appropriate.
Video streaming
Use lighttpd or specialized software. And still... What about other resources?
I am using Linux Virtual Server as a load balancer in front of Apache servers (with specific patches for LVS-NAT) and I am happy :) This is probably the answer you want to hear.