Does CloudBees RUN@cloud bill by the hour or by the minute?

For example, if I am running an app and it scales out to 5 servers for just 10 minutes, do I pay for 5 servers for 10 minutes or 5 for the whole hour?

CloudBees measures your app multiple times per hour and bills by the minutes consumed. The same is true of DEV@cloud, so the resolution of measurement is the minute.

Related

How can I do stress/performance testing in JMeter?

I want to start stress testing with the anticipated number of users (or just 1 virtual user) and gradually increase the load, e.g. 10 threads, 20 threads, … 100 threads, until response times exceed the acceptable value or errors start occurring. For all of these test runs, should I increase the Ramp-up Period (seconds), or should it remain the same for every test?
Apparently, the ramp-up time shouldn't be the same for all of your tests; you have to set the ramp-up period accordingly.
Ramp-up is the time over which all of the users arrive at your tested application server.
You can check this thread also: How should I calculate Ramp-up time in Jmeter
As per JMeter documentation:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
So if you don't have a better idea - go for the ramp-up period in seconds equal to the number of users.
The point of ramp-up is to increase the load gradually, so you will be able to correlate the increasing load with other performance metrics such as response time, throughput, server hits per second, errors per second, etc.
See the JMeter Glossary for an explanation of the metrics JMeter records.
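
To make the delay arithmetic from the quoted documentation concrete, here is a throwaway sketch in plain Java; the thread counts and ramp-up periods are just example values, not taken from any particular test plan:

    // Sketch of the ramp-up arithmetic described above: the per-thread start
    // delay is simply ramp-up period divided by number of threads.
    public class RampUpMath {
        public static void main(String[] args) {
            int[][] scenarios = {
                {10, 100},  // 10 threads over 100 s -> one new thread every 10 s
                {30, 120},  // 30 threads over 120 s -> one new thread every 4 s
                {100, 100}  // rule of thumb: ramp-up (s) == number of threads
            };
            for (int[] s : scenarios) {
                int threads = s[0];
                int rampUpSeconds = s[1];
                double delayBetweenThreads = (double) rampUpSeconds / threads;
                System.out.printf("%d threads / %d s ramp-up -> a new thread every %.1f s%n",
                        threads, rampUpSeconds, delayBetweenThreads);
            }
        }
    }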

How good is Scheduling Messages with RabbitMQ (rabbitmq_delayed_message_exchange) for production usage?

Currently, we plan to generate a peak of 20K notifications per minute and to add delays so that notification status is re-checked at a few intervals (2 minutes, 5 minutes, 1 hour, and then 1 day) if the status isn't final.
I have done a POC based on
https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/
and it looks good, but I wanted to hear about real-world stats or any other suggestions before going live.
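
For reference, here is a minimal sketch of how the plugin is typically used from the RabbitMQ Java client: the exchange is declared with type x-delayed-message plus an x-delayed-type argument, and the per-message delay is set via an x-delay header in milliseconds. The exchange, queue, and routing-key names below are made up, and it assumes a recent amqp-client where Connection and Channel are AutoCloseable:

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.Map;

    public class DelayedPublishSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumption: broker with the delayed-message plugin enabled

            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {

                // The delayed exchange is declared with type "x-delayed-message";
                // the underlying routing behaviour is given via "x-delayed-type".
                Map<String, Object> exchangeArgs = new HashMap<>();
                exchangeArgs.put("x-delayed-type", "direct");
                channel.exchangeDeclare("notifications.delayed", "x-delayed-message",
                        true, false, exchangeArgs);

                channel.queueDeclare("notification-status-check", true, false, false, null);
                channel.queueBind("notification-status-check", "notifications.delayed", "status");

                // The per-message delay (milliseconds) goes into the "x-delay" header.
                Map<String, Object> headers = new HashMap<>();
                headers.put("x-delay", 2 * 60 * 1000); // first re-check after 2 minutes
                AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                        .headers(headers)
                        .build();

                channel.basicPublish("notifications.delayed", "status", props,
                        "notification-id-123".getBytes(StandardCharsets.UTF_8));
            }
        }
    }

Each time the status check comes back non-final, the message could be re-published with a larger x-delay (5 minutes, 1 hour, then 1 day), matching the intervals described above.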

Weblogic session timeouts don't extend as expected

I'm using WebLogic and have the session timeout in my web.xml set to 30 minutes. Users get the initial 30 minutes, but activity after that only extends the session by about 5 minutes.
So, for example, at the 28th minute the user performs some activity against the server; 7 minutes later the user is forced off as the session times out.
The expected behavior would be another full 30 minutes, since the session timeout should be measured from the time of last activity. However, it appears that WebLogic only extends your time in 5-minute increments.
Tomcat, on the other hand (which I'm not allowed to use), works as expected: you get a full 30 minutes from the time of last activity.
Does anyone know of any internal WebLogic setting that might control how much time the user gets after the initial session timeout length?
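
For reference, the timeout being described maps to the standard servlet API (HttpSession.getMaxInactiveInterval() / setMaxInactiveInterval(), in seconds, versus web.xml's session-timeout in minutes), which can be used to confirm what value the container actually applies at runtime. A minimal sketch with a hypothetical servlet path:

    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;
    import java.io.IOException;

    // Hypothetical diagnostic servlet: prints the effective session timeout so you
    // can verify what the container (WebLogic, Tomcat, ...) actually applied.
    @WebServlet("/session-timeout")
    public class SessionTimeoutServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true);
            // getMaxInactiveInterval() is in seconds; web.xml's <session-timeout> is in minutes.
            int seconds = session.getMaxInactiveInterval();
            resp.getWriter().printf("Session %s times out after %d seconds of inactivity%n",
                    session.getId(), seconds);
            // The timeout can also be set per session, overriding web.xml:
            // session.setMaxInactiveInterval(30 * 60);
        }
    }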

complex crons for shared hosting

I have an app which displays the latest scores in football games. Every 15 minutes, a cron job runs to check if a game has started. If it has, another cron job needs to start which runs every 30 seconds for the next 2 hours (this job queries an API to get the latest incidents for the game). I'm on shared hosting with Plesk and there is no SSH access. Plesk appears to offer only very simple cron management for scheduling the execution of a script every x minutes. What is the best solution for me?
Have the 30-second cron job run every 30 seconds (even at times when there is no game) and check inside the script that it runs whether you are inside your 2-hour timeframe: if yes, do the work; otherwise end the script immediately.
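
A minimal sketch of that guard, written in Java purely for illustration (the real script can be in whatever language the host runs; loadGameStartTime() and fetchLatestIncidents() are hypothetical placeholders):

    import java.time.Duration;
    import java.time.Instant;

    // Sketch of the "am I inside the 2-hour window?" check the answer describes.
    // In a real setup this runs on every cron invocation and exits cheaply
    // whenever no game is in progress.
    public class ScoreUpdateJob {
        private static final Duration WINDOW = Duration.ofHours(2);

        public static void main(String[] args) {
            Instant gameStart = loadGameStartTime();
            if (gameStart == null) {
                return; // no game in progress - exit immediately, costs almost nothing
            }
            Instant now = Instant.now();
            boolean insideWindow = !now.isBefore(gameStart)
                    && now.isBefore(gameStart.plus(WINDOW));
            if (insideWindow) {
                fetchLatestIncidents(); // the 30-second work: query the API, store results
            }
            // outside the window: do nothing and let the process end
        }

        private static Instant loadGameStartTime() {
            // placeholder: read the start time recorded by the 15-minute cron
            return null;
        }

        private static void fetchLatestIncidents() {
            // placeholder: call the football API and persist the latest incidents
        }
    }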

What's the "average" requests per second for a production web application?

I have no frame of reference in terms of what's considered "fast"; I'd always wondered this but have never found a straight answer...
OpenStreetMap seems to have 10-20 per second
Wikipedia seems to be 30000 to 70000 per second spread over 300 servers (100 to 200 requests per second per machine, most of which are caches)
Geograph is getting 7000 images per week (1 upload per 95 seconds)
Not sure anyone is still interested, but this information was posted about Twitter (and here too):
The Stats
Over 350,000 users. The actual numbers are as always, very super super top secret.
600 requests per second.
Average 200-300 connections per second. Spiking to 800 connections per second.
MySQL handled 2,400 requests per second.
180 Rails instances. Uses Mongrel as the "web" server.
1 MySQL Server (one big 8 core box) and 1 slave. Slave is read only for statistics and reporting.
30+ processes for handling odd jobs.
8 Sun X4100s.
Process a request in 200 milliseconds in Rails.
Average time spent in the database is 50-100 milliseconds.
Over 16 GB of memcached.
When I go to the control panel of my webhost, open up phpMyAdmin, and click on "Show MySQL runtime information", I get:
This MySQL server has been running for 53 days, 15 hours, 28 minutes and 53 seconds. It started up on Oct 24, 2008 at 04:03 AM.
Query statistics: Since its startup, 3,444,378,344 queries have been sent to the server.
Total 3,444 M
per hour 2.68 M
per minute 44.59 k
per second 743.13
That's an average of 743 MySQL queries every single second for the past 53 days!
I don't know about you, but to me that's fast! Very fast!!
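
As a sanity check on those phpMyAdmin averages, the rates are just the query total divided by the uptime; a throwaway sketch:

    // Quick check of the phpMyAdmin averages quoted above:
    // total queries divided by uptime gives the per-second/minute/hour rates.
    public class QueryRate {
        public static void main(String[] args) {
            long totalQueries = 3_444_378_344L;
            long uptimeSeconds = ((53L * 24 + 15) * 60 + 28) * 60 + 53; // 53 d 15 h 28 m 53 s

            double perSecond = (double) totalQueries / uptimeSeconds;
            System.out.printf("per second: %.2f%n", perSecond);                 // ~743.13
            System.out.printf("per minute: %.2f k%n", perSecond * 60 / 1_000);  // ~44.59 k
            System.out.printf("per hour:   %.2f M%n", perSecond * 3600 / 1_000_000); // ~2.68 M
        }
    }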
Personally, I like seeing both analyses every time: requests/second and average time/request, and I love seeing the max request time as well on top of that. It is easy to flip between the two: if you have 61 requests/second, you can flip it to 1000 ms / 61 requests, i.e. roughly 16 ms per request.
To answer your question: we have been doing a huge load test ourselves and find that it varies across the Amazon hardware we use (the best value was the 32-bit medium-CPU instance when it came down to $ / event / second), and our throughput ranged from 29 requests/second/node up to 150 requests/second/node.
Better hardware of course gives better results, but not the best ROI. Anyway, this post was great, as I was looking for some parallels to see if my numbers were in the ballpark, and I shared mine as well in case someone else is looking. Mine is purely loaded as high as it can go.
NOTE: thanks to the requests/second analysis (not ms/request), we found a major Linux issue that we are trying to resolve, where Linux (we tested a server in C and in Java) freezes all calls into the socket libraries when under too much load, which seems very odd. The full post can be found here:
http://ubuntuforums.org/showthread.php?p=11202389
We are still trying to resolve that, as fixing it gives us a huge performance boost: our test goes from 2 minutes 42 seconds to 1 minute 35 seconds when this is fixed, so we see a 33% performance improvement. Not to mention, the worse the DoS attack is, the longer these pauses become, so that all CPUs drop to zero and stop processing. In my opinion server processing should continue in the face of a DoS, but for some reason it freezes up every once in a while during the DoS, sometimes for up to 30 seconds!
ADDITION: We found out it was actually a JDK race-condition bug. It was hard to isolate on big clusters, but when we ran 1 server and 1 data node (10 of those pairs), we could reproduce it every time and could just look at the server/data node it occurred on. Switching the JDK to an earlier release fixed the issue. We were on jdk1.6.0_26, I believe.
That is a very open apples-to-oranges type of question.
You are asking
1. the average request load for a production application
2. what is considered fast
These don't necessarily relate.
Your average # of requests per second is determined by (roughly, see the sketch below):
a. the number of simultaneous users
b. the average number of page requests they make per second
c. the number of additional requests (e.g. AJAX calls, etc.)
As to what is considered fast: do you mean how many requests a site can take? Or whether a piece of hardware is considered fast if it can process xyz # of requests per second?
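
A back-of-the-envelope sketch of that estimate; every input number below is made up purely for illustration:

    // Rough requests-per-second estimate from the three factors listed above.
    // All of the input numbers here are hypothetical.
    public class RequestRateEstimate {
        public static void main(String[] args) {
            int simultaneousUsers = 500;             // a. users active at the same time
            double pageRequestsPerUserPerSec = 0.05; // b. one page view every ~20 s per user
            double extraRequestsPerPage = 10;        // c. AJAX calls, assets, API hits per page

            double pageRps = simultaneousUsers * pageRequestsPerUserPerSec;
            double totalRps = pageRps * (1 + extraRequestsPerPage);

            System.out.printf("page requests/s: %.1f, total requests/s: %.1f%n",
                    pageRps, totalRps);
            // With these made-up numbers: 25 page requests/s, 275 total requests/s.
        }
    }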
Note that hit-rate graphs will be sinusoidal patterns, with 'peak hours' maybe 2x or 3x the rate that you get while users are sleeping. (This can be useful when you're scheduling the daily batch-processing work on your servers.)
You can see the effect even on 'international' (multilingual, localised) sites like Wikipedia.
Less than 2 seconds per user, usually - i.e. users that see slower responses than this think the system is slow.
Now you tell me how many users you have connected.
You can search "slashdot effect analysis" for graphs of what you would see if some aspect of the site suddenly became popular in the news, e.g. this graph on wiki.
Web-applications that survive tend to be the ones which can generate static pages instead of putting every request through a processing language.
There was an excellent video (I think it might have been on ted.com? I think it might have been by the Flickr web team? Does someone know the link?) with ideas on how to scale websites beyond a single server, e.g. how to allocate connections amongst the mix of read-only and read-write servers to get the best effect for various types of users.
I have a customer that uses our software on commercial web app servers. The software runs on 40 servers and is a 10-year-old Java API.
4000 TPS.