How can I test an internet connection over a certain period of time?

I would like to test the internet connection at a certain location, so I want to write my own script that is triggered every few (random) minutes to run some checks.
I was thinking about:
Measuring the latency by pinging some big site.
Downloading a big file from somewhere (bandwidth).
But I think this is not reliable, as the file I try to download is probably cached on some provider's server.
Are more checks necessary to measure the quality of the internet connection (availability, latency and bandwidth)?
How can I avoid having my requests cached?
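For what it's worth, here is a rough sketch of such a script in TypeScript (Node.js). The host, the test-file URL and the random 1-10 minute interval are placeholders; a random query string plus no-cache headers is one common way to reduce (though not fully rule out) caching by intermediate servers, and for a stronger guarantee you would host the test file yourself with caching disabled.

```typescript
// connectivity-check.ts - a rough sketch, not a finished tool.
// Assumptions: Node.js 18+ (for the global fetch), a Linux-style `ping` command on
// the PATH, and TEST_FILE_URL pointing at a reasonably large file you may download.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const execFileAsync = promisify(execFile);

const PING_HOST = "example.com";                      // placeholder host
const TEST_FILE_URL = "https://example.com/testfile"; // placeholder URL

// Latency: run the system ping once and parse the reported round-trip time.
async function measureLatencyMs(): Promise<number> {
  const { stdout } = await execFileAsync("ping", ["-c", "1", PING_HOST]); // throws if unreachable
  const match = stdout.match(/time[=<]([\d.]+)/);
  return match ? parseFloat(match[1]) : NaN;
}

// Bandwidth: download the file with a random query string and no-cache headers,
// which makes it less likely that an intermediate cache serves a stored copy.
async function measureBandwidthMbps(): Promise<number> {
  const url = `${TEST_FILE_URL}?nocache=${Date.now()}-${Math.random()}`;
  const start = Date.now();
  const res = await fetch(url, { headers: { "Cache-Control": "no-cache", Pragma: "no-cache" } });
  const bytes = (await res.arrayBuffer()).byteLength;
  const seconds = (Date.now() - start) / 1000;
  return (bytes * 8) / 1_000_000 / seconds; // megabits per second
}

async function runCheck(): Promise<void> {
  try {
    const latency = await measureLatencyMs();
    const mbps = await measureBandwidthMbps();
    console.log(new Date().toISOString(), `latency=${latency}ms bandwidth=${mbps.toFixed(2)}Mbps`);
  } catch (err) {
    // Availability: any failure counts as "connection down" for this sample.
    console.log(new Date().toISOString(), "check failed:", err);
  }
  // Schedule the next check after a random delay of 1-10 minutes.
  setTimeout(runCheck, (1 + Math.random() * 9) * 60_000);
}

runCheck();
```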

Pooling, Client Checked out, idleTimeoutMillis

This is my understanding after reading the documentation:
Pooling: as with many other databases, only a limited number of connections is allowed, so clients line up and wait for a free connection to be returned to the pool (a connection is like a token, in a sense).
At any given time, the number of active and/or available connections is kept within the range 0 to max.
idleTimeoutMillis is described as the "milliseconds a client must sit idle in the pool and not be checked out before it is disconnected from the backend and discarded." I'm not clear on this. My general assumption: when a client (say a web app) has done its CRUD work but does not voluntarily return the connection, it is considered idle; node-postgres starts the clock, and once it reaches that number of milliseconds it takes the connection back into the pool for the next client. So what does "not be checked out before it is disconnected from the backend and discarded" mean?
Say idleTimeoutMillis: 100 - does that mean the connection will literally be disconnected (logged out) after being idle for 100 milliseconds? If so, it is not returned to the pool, and that would result in frequent reconnections, as the documentation warns below:
Connecting a new client to the PostgreSQL server requires a handshake which can take 20-30 milliseconds. During this time passwords are negotiated, SSL may be established, and configuration information is shared with the client & server. Incurring this cost every time we want to execute a query would substantially slow down our application.
Thanks in advance for answering these possibly silly questions.
Sorry this question went unanswered for so long, but I recently came across a bug that called my own understanding of this library into question too.
Essentially, when you are pooling you are telling the library that it may have at most X connections to the database open simultaneously. Every request that comes into a CRUD API, for example, needs a connection, so you can serve at most X requests at the same time. As soon as a request comes in, it "checks out" a connection from the pool; another request cannot use that connection while it is held.
So in order to "reuse" the same connection, once a request is done with it you have to release it, marking it as ready to be checked out again. When another request comes in, it can then use this connection to run its query, as in the sketch below.
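As a concrete illustration of that check-out / release cycle, here is a minimal node-postgres sketch; the users table, the query and the pool size are made-up examples, and the connection details are assumed to come from the usual PG* environment variables.

```typescript
// Minimal sketch of checking out and releasing a client with node-postgres (pg).
import { Pool } from "pg";

const pool = new Pool({ max: 10 }); // at most 10 connections open at the same time

async function getUserName(id: number): Promise<string | undefined> {
  const client = await pool.connect(); // check out a connection; waits if all 10 are busy
  try {
    const result = await client.query("SELECT name FROM users WHERE id = $1", [id]);
    return result.rows[0]?.name;
  } finally {
    client.release(); // hand the connection back so the next request can check it out
  }
}
```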
idleTimeoutMillis: this variable was very confusing to me and took a while to get my head around. When an open connection to the database has been released back to the pool, it is in an IDLE state, which means any incoming request can check it out and use it, since nobody else is using it. This variable says how long we wait, while a connection sits in that IDLE state, before we close it. This can be useful for several reasons. Open database connections take up memory and other resources, so closing them can be beneficial. It also matters when autoscaling: if you have been at peak requests per second and all database connections are in use, it is useful to keep IDLE connections open for a while; but if the timeout is too long and you then scale down, you can end up holding on to memory longer than necessary, since each IDLE connection takes some memory.
The benefit of keeping a connection open is that when you send a query over it you don't need to re-authenticate with the database; it is already authenticated and ready to go.
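Putting the two together, here is a sketch of how idleTimeoutMillis fits into the pool configuration; the numbers are illustrative, not recommendations.

```typescript
// Sketch: pool configuration with idleTimeoutMillis (values are only examples).
import { Pool } from "pg";

const pool = new Pool({
  max: 10,                        // never more than 10 connections open at once
  idleTimeoutMillis: 30_000,      // an idle (released) connection is physically closed after 30s unused
  connectionTimeoutMillis: 2_000, // give up waiting for a free connection after 2s
});

// pool.query() checks a client out, runs the query and releases it for you.
// The underlying connection stays open and authenticated afterwards, so another
// query within 30 seconds skips the 20-30 ms handshake; only after 30 idle
// seconds is it disconnected, and a later query pays that cost again.
async function main(): Promise<void> {
  const { rows } = await pool.query("SELECT now()");
  console.log(rows[0]);
}

main().finally(() => pool.end()); // close all pooled connections on shutdown
```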

JMeter load test server down issues

I used a load of 100 users with the Ultimate Thread Group, running in non-GUI mode.
The execution only ran for around 5 minutes; after that my test environment shut down. I am not able to drill down into the issue. What could be the reason for the server going down? My environment supports 500 users.
How do you know your environment supports 500 users?
100 threads don't necessarily map to 100 real users; you need to consider a lot of things while designing your test, in particular:
Real users don't hammer the server non-stop; they need some time to "think" between operations. So make sure you add Timers between requests and configure them to represent reasonable think times.
Real users use real browsers, and real browsers download embedded resources (images, scripts, styles, fonts, etc.), but they do it only once; on subsequent requests the resources are returned from the cache and no actual request is made. Make sure to add an HTTP Cache Manager to your Test Plan.
You need to add the load gradually; this way you will be able to state at what number of threads (virtual users) response times start exceeding acceptable values or errors start occurring (see the sketch after this list). Generate an HTML Reporting Dashboard, look into the metrics and correlate them with the increasing load.
Make sure that your application under test has enough headroom to operate in terms of CPU, RAM, disk space, etc. You can monitor these counters using the JMeter PerfMon Plugin.
Check your application logs; most probably they will contain a clue to the root cause of the failure. If you're familiar with the programming language your application is written in, using a profiler during the load test can tell you the full story about what is going on, which functions and objects consume the most resources, and so on.
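The ramp-up itself would be configured in JMeter (for example in the Ultimate Thread Group), but the underlying idea is easy to see in a toy script. The sketch below is plain Node/TypeScript rather than JMeter, the target URL is a placeholder, and it only illustrates how stepping up concurrency while recording response times reveals the point where things start to degrade; point it at a test environment, never at production.

```typescript
// Toy illustration (not JMeter) of the "add the load gradually" idea.
const TARGET_URL = "http://localhost:8080/"; // placeholder; use a test environment only

async function timedRequest(): Promise<number> {
  const start = Date.now();
  try {
    const res = await fetch(TARGET_URL);
    await res.arrayBuffer(); // read the body so the timing covers the whole response
  } catch {
    // treat a failed request as a (slow) data point rather than crashing the ramp
  }
  return Date.now() - start;
}

async function runStep(users: number): Promise<void> {
  const timings = await Promise.all(Array.from({ length: users }, () => timedRequest()));
  const avg = timings.reduce((a, b) => a + b, 0) / timings.length;
  console.log(`${users} concurrent requests -> average ${avg.toFixed(0)} ms`);
}

(async () => {
  // Ramp: 10, 20, ... 100 concurrent requests with a pause between steps.
  // The step where the average time jumps (or errors appear) marks the current ceiling.
  for (let users = 10; users <= 100; users += 10) {
    await runStep(users);
    await new Promise((resolve) => setTimeout(resolve, 2_000));
  }
})();
```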

Server load is minimal but website responds poorly

I have a VPS at Hetzner. The server is located in Germany.
It has 256 GB RAM and 6 CPUs (12 threads).
I have a file which, since yesterday, has been requested about 30 times per second. The file runs 2 SELECT, 2 UPDATE and 2 INSERT queries, so I assumed (not sure how this works) that this file alone sends about 180 queries per second to the server. Right after these requests started, all the websites on the server began loading poorly. I changed the file to run just one SELECT query and then die. This didn't help. In WHM the load is about 0.02.
I've checked the error logs and there is no max_user_connections or any other error there.
I have enabled the slow query log and checked the log file; there is nothing (I tested it with SELECT SLEEP(10), and that query was logged).
These are the visit statistics; please pay attention to May 30th:
Bandwidth stats for the last 24 hours:
There are many errors like this in the ssl_log (from different IPs, of course):
188.121.206.150 - - [30/May/2018:19:50:03 +0200] "-" 408 - "-" "-"
I've been searching the web a lot and couldn't find any solution. Could anyone at least tell me what I should monitor, or where? I have full access to everything inside the server. Any help is appreciated.
UPDATE 1
I have a subdomain, banners.analyticson.com (access is allowed for now), where I host all the images and HTML5 files that are requested.
Take one image for example: https://banners.analyticson.com/img/suy8G1S6RU.jpg
It takes too long to load. As far as I can tell, this subdomain has some issue.
The script I mentioned earlier (with 6 queries) just tries to serve one of those banners to the user, so the result of that script is to return one banner from banners.analyticson.com.
UPDATE 2
I've checked my script, and it is fine. It takes less than 1 second to complete.
I've also checked the top command, and here is the result. I'm not sure whether the %MEM value is fine.
You're going to have to narrow the problem down...
There are multiple potential issues.
First thing to eliminate would be the performance of your new script on a development laptop - I assume you're using PHP, so use the profiling tools to work out what is going on. If it's a database query, you'll see which one by looking at the profiler.
If your PHP script and database queries are fine, the next thing to look at: it sounds like you've hit some bottleneck resource on your infrastructure. In these cases, scripts that run fine as a single request start queueing for the bottleneck resource, and every new request adds to the queue until the whole server starts to crawl. This can be a bit of a puzzle - start with top and keep digging.
Next, I'd look at the configuration of Apache to make sure everything is squeaky clean - Apache used to have a default of doing a reverse DNS lookup for every request, which slows the server down rather impressively in production. You may also want to look at your SSL configuration - the 408 errors you report can be linked to a load balancer issue.
If it's not as simple as memory, CPU etc., you're into more esoteric issues. You may need to ramp up a load testing rig so you can experiment without affecting the live site - typically, I do this on a machine as similar to live as possible, using Apache JMeter to generate load, and find the "inflection point". Typically, you see response times increase linearly with the number of concurrent requests, until you hit the bottleneck resource, at which point the response time increases rapidly. As a simple example, if you have 10 database connections available, response time should increase linearly up to 10 concurrent connections, and then become much larger from 11 up.
Knowing where the inflection point is and being able to recreate it allows you to use PHP profiling tools under load. This is a lot of work.
UPDATE
You're using php-cgi; this is easily the most inefficient way of running PHP scripts, and your server is barely breaking a sweat - CPU and memory are basically idle. Here's a comparison of the ways to run PHP; consider changing to mod_php.

Web application hangs after multiple requests

The application is using Apache Server as a web server and Tomcat as an application server.
Operations/requests can be triggered from the UI, and they can take a long time to return from the server, as they do processing such as fetching data from the database and performing calculations on that data. The time depends on the amount of data in the database and the time span of the data being processed; it could be as long as 30 minutes to an hour, or as little as 2 minutes, depending on the parameters.
Apart from this, there are some other calls which fetch a small amount of data from the database and return immediately.
Now when I have several (say 4 or 5) of these long, heavy calls running on the server and I make a call that is supposed to be small and return immediately, this call also hangs: it never even reaches my controller.
I am unable to find a way to debug this issue or find a resolution. Please let me know if you happen to know how to proceed with this issue.
I am using Spring, with c3p0 connection pooling and Hibernate.
So I figured out what was wrong with the application and thought I'd share it in case someone somewhere faces the same issue. It turns out nothing was wrong with the application server or the web server; technically speaking, it was the browser's fault.
I found out that a browser can only have a limited number of concurrent open calls to a domain; in the case of the latest version of Chrome at the time of writing, it is 6. This is something all browsers do to prevent DDoS attacks.
As the HTTP calls in my application take a long time to return (until the calculations are completed), several HTTP calls accumulate concurrently, and as a result the browser stops sending any further calls after the 6th concurrent call, so it feels like the application is unresponsive. You can read about the maximum number of concurrent calls a browser allows on SO.
A possible solution I have thought of is either polling or, even better, long polling. I would have used WebSockets, but then we would need to make a lot of changes.
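For what it's worth, here is a minimal client-side sketch of the long-polling idea in TypeScript; the /start-job and /job-status endpoints are hypothetical names, not part of my actual application. The point is that each heavy calculation occupies at most one short-lived request at a time, so the browser's 6-connections-per-domain budget is no longer exhausted.

```typescript
// Minimal long-polling sketch (the /start-job and /job-status endpoints are made up).
interface JobStatus {
  done: boolean;
  result?: unknown;
}

// Kick off the heavy calculation; the server returns a job id immediately.
async function startJob(params: Record<string, string>): Promise<string> {
  const res = await fetch("/start-job", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
  const { jobId } = await res.json();
  return jobId;
}

// Long poll: the server holds each /job-status request open for a short while
// (say up to 25 seconds) and answers early if the job finishes, so the client
// simply loops without busy-waiting and without any long-lived blocked call.
async function waitForResult(jobId: string): Promise<unknown> {
  while (true) {
    const res = await fetch(`/job-status?id=${encodeURIComponent(jobId)}`);
    const status: JobStatus = await res.json();
    if (status.done) return status.result;
  }
}

// Usage: start the job, then wait for it without blocking other requests.
startJob({ from: "2018-01-01", to: "2018-06-30" })
  .then(waitForResult)
  .then((result) => console.log("calculation finished", result));
```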

Apache crash when sending mass email through third party

I have a LAMP stack set up on DigitalOcean (Ubuntu 12.04) that is pretty stable. The only time we have had a crash is when we sent out a mass email to about 30,000 people. We are not using the server to send the message, but a third-party email service (iContact). I watch the server with top and can see it fill up with Apache entries (each taking about 20 MB) for a short while, then drop back down after the mail has finished being sent.
I have successfully adjusted the Apache settings so it no longer crashes; it just slows down for a bit. These are not hits to the pages, but something is making Apache ramp up and spin off a ton of workers during the email send process.
My question is: where do I look to get some idea of what is happening? Unfortunately iContact has been no help, and the log files I've looked at aren't telling me much, so I think I'm likely looking in the wrong place.
I used to send emails to over 200,000 people directly from a single machine. Trying to do it from a web page is pretty crazy, so I wrote a command-line script that first writes the messages into a database and then sends ~50 at a time from the database, deleting them as it goes.
With Symfony/Swiftmailer it is pretty easy these days; the sending part is just a shell script that keeps running 'app/console swiftmailer:spool:send', sleeping and restarting until the database is empty.
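My setup was Symfony/Swiftmailer, but the pattern itself is independent of the framework. Below is a rough sketch of the same idea in Node/TypeScript with nodemailer; the queue helpers, table layout and SMTP settings are placeholders you would replace with your own.

```typescript
// Sketch of a command-line mail worker: send queued messages in small batches,
// pause between batches, and stop once the queue is empty.
import nodemailer from "nodemailer";

interface QueuedMail { id: number; to: string; subject: string; body: string; }

// Placeholder queue helpers; in a real worker these would be SELECT / DELETE queries.
async function fetchBatch(limit: number): Promise<QueuedMail[]> {
  return []; // e.g. SELECT id, recipient, subject, body FROM mail_queue LIMIT <limit>
}
async function deleteSent(ids: number[]): Promise<void> {
  // e.g. DELETE FROM mail_queue WHERE id = ANY(<ids>)
}

const transporter = nodemailer.createTransport({ host: "smtp.example.com", port: 587 }); // placeholder SMTP
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function drainQueue(): Promise<void> {
  while (true) {
    const batch = await fetchBatch(50);       // roughly 50 at a time, as described above
    if (batch.length === 0) break;            // queue is empty, the worker exits
    for (const mail of batch) {
      await transporter.sendMail({ from: "news@example.com", to: mail.to, subject: mail.subject, text: mail.body });
    }
    await deleteSent(batch.map((m) => m.id)); // delete as we go
    await sleep(1_000);                       // brief pause so the web server isn't starved
  }
}

drainQueue().catch((err) => console.error("mail worker failed:", err));
```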