I am trying to understand how Heroku works. OK, I get 1 dyno free, and if I register a domain I can have my own website, for example "www.mysite.com", on Heroku. I prefer Heroku because uploading a Rails app is really easy. But how fast is 1 dyno? Why is 1 dyno free while 2 cost 35 dollars? Do I really have to pay a minimum of 35 dollars per month, when I can get hosting like http://www.site5.com for 5 dollars per month? Can somebody clarify this for me?
They do not make it obvious in the documentation, but a single dyno will go into a suspended state after 30 minutes of inactivity. This saves them a bit of money, but comes with the trade-off that the first request it receives will have a bit higher latency because the dyno has to wake up.
In my experience, a single Heroku dyno performs better than an Amazon EC2 micro instance, which has ~600MB of RAM.
Once you upgrade to two dynos, they will no longer go into this suspended state.
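If the wake-up latency bothers you on the free tier, a common (unofficial) workaround is to ping the app periodically so it never idles long enough to be suspended. A minimal sketch using cron and the example domain above:

# keep the dyno awake: fetch the homepage every 20 minutes
*/20 * * * * curl -s http://www.mysite.com/ > /dev/null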
We're running a 7-node redis cluster, with all nodes as masters (no slave replication). We're using this as an in-memory cache, so we've commented out all saves in redis.conf, and we've got the following other non-defaults in redis.conf:
maxmemory 30gb
maxmemory-policy allkeys-lru
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-require-full-coverage no
The client for this cluster is a Spring Boot REST API application, using spring-data-redis with Jedis as the driver. We mainly use the Spring caching annotations.
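For context, here is a minimal sketch of how such a client might be wired up with the spring-data-redis 1.x-era API (the node addresses, bean names and redirect count are illustrative, not our actual config):

import java.util.Arrays;

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public JedisConnectionFactory connectionFactory() {
        // A few seed nodes; Jedis discovers the rest of the cluster from them.
        RedisClusterConfiguration cluster = new RedisClusterConfiguration(
                Arrays.asList("10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.3:6379"));
        cluster.setMaxRedirects(5); // the max-redirects setting discussed below
        return new JedisConnectionFactory(cluster);
    }

    @Bean
    public RedisTemplate<Object, Object> redisTemplate(JedisConnectionFactory cf) {
        RedisTemplate<Object, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(cf);
        return template;
    }

    @Bean
    public CacheManager cacheManager(RedisTemplate<Object, Object> template) {
        // Backs the @Cacheable/@CacheEvict annotations mentioned above.
        return new RedisCacheManager(template);
    }
}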
We had an issue the other day where one of the masters went down for a while. With a single master down in a 7-node cluster we noted a marked increase in the average response time for api calls involving redis, which I would expect.
When the down master was brought back online and re-joined the cluster, we had a massive spike in response time. Via New Relic I can see that the app started making a ton of Redis cluster calls (New Relic doesn't tell me which cluster subcommand was being used). Our normal average response time is around 5ms; during this time it went up to 800ms, and we had a few slow sample transactions that took > 70sec. On all app JVMs I see the number of active threads jump from a normal 8-9 up to around 300 during this time. We have configured the Tomcat HTTP thread pool to allow 400 threads max. After about 3 minutes the problem cleared itself up, but I now have people questioning the stability of the caching solution we chose. New Relic doesn't give any insight into where the additional time on the long requests is being spent (it's apparently in an area that New Relic doesn't instrument).
I've made some attempt to reproduce by running some jmeter load tests against a development environment, and while I see some moderate response time spikes when re-attaching a redis-cluster master, I don't see anything near what we saw in production. I've also run across https://github.com/xetorthio/jedis/issues/1108, but I'm not gaining any useful insight from that. I tried reducing spring.redis.cluster.max-redirects from the default 5 to 0, which didn't seem to have much effect on my load test results. I'm also not sure how appropriate a change that is for my use case.
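For reference, the Spring Boot properties involved look like this (the node addresses are placeholders; 0 is the experimental redirect value mentioned above):

spring.redis.cluster.nodes=10.0.0.1:6379,10.0.0.2:6379,10.0.0.3:6379
spring.redis.cluster.max-redirects=0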
We are facing a problem with one of our Drupal sites hosted on Apache. Today, suddenly, there are 150 httpd2-prefork processes running, which brings the site down. After every Apache restart it comes up for a short while; then the number of processes grows back to the maximum and the site goes down again. Can anyone help here?
Look at your server logs and see who your visitors are (real people? bots?), where they're coming from (IP), and what pages they're visiting.
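For example, a quick way to rank visitors by IP from a standard access log (the log path is a guess; adjust it for your distro):

awk '{print $1}' /var/log/apache2/access_log | sort | uniq -c | sort -rn | head -20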
If it's just a few offenders, you can try blocking them. If it's a distributed attack, it's going to be more difficult.
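A minimal sketch for blocking a handful of offenders at the Apache level, using 2.2-style directives since httpd2-prefork suggests an older install (the docroot and the documentation-range IPs stand in for your real values):

<Directory /srv/www/htdocs>
    Order allow,deny
    Allow from all
    Deny from 203.0.113.45
    Deny from 198.51.100.0/24
</Directory>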
Do you have any modules installed to help block bad people/things? I recommend: Honeypot and Bad Behavior.
We have a problem with WebLogic 10.3.2. We installed a standard domain with default parameters. In this domain we have only one managed server, running a single web application.
After installation we faced performance problems. Sometimes a user waits 1-2 minutes for the application to respond. (For example, the user clicks a button and it takes 1-2 minutes for the GUI to refresh. It's not a complicated task.)
To overcome these performance problems, we defined the following parameters under configuration -> server start -> arguments:
-Xms4g -Xmx6g -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=500
And also we change the datasource connection pool parameters of the application in the weblogic side as below.
Initial Capacity:50
Maximum Capacity:250
Capacity Increment: 10
Statement Cache Type: LRU
Statement Cache Size: 50
We run WebLogic on servers with 32 GB RAM and 16 CPUs. 25% of the machine's resources are dedicated to WebLogic. But we still have performance problems.
Our target is to service 300-400 concurrent users without the 1-2 minute wait on each application request.
Could defining a work manager solve the performance issue (see the sketch below)?
Is my datasource or managed bean definition incorrect?
Can anyone help me?
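To make the work manager question concrete, here is my understanding of what a minimal definition in weblogic.xml would look like (the names and the thread count are illustrative, not something we have tested):

<work-manager>
    <name>AppWorkManager</name>
    <max-threads-constraint>
        <name>AppMaxThreads</name>
        <count>100</count>
    </max-threads-constraint>
</work-manager>

The web application would then be pointed at it with <wl-dispatch-policy>AppWorkManager</wl-dispatch-policy> in the same file.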
Thanks for your replies
Apache httpd has served me well over the years: rock solid and highly performant in the legacy custom LAMP stack application I've been maintaining (read: trying to escape from).
My LAMP stack days are now numbered and I'm moving on to the wonderful world of polyglot:
1) Scala REST framework on Jetty 8 (on the fence between Spray & Scalatra)
2) Load balancer/Static file server: Apache Httpd, Nginx, or ?
3) MySQL via ScalaQuery
4) Client-side: jQuery, Backbone, 320 & up or Twitter Bootstrap
Option #2 is the focus of this question. The benchmarks I have seen indicate that Nginx, Lighttpd, G-WAN (in particular) and friends blow away Apache in terms of performance, but this blowing away appears to manifest more in high-load scenarios where the web server is handling many simultaneous connections. Given that our server pushes at most 100 GB of bandwidth per month and the average load is around 0.10, the high-load scenario is clearly not at play.
Basically I need both the connection to the application server (Jetty) and static file delivery by the web server to be reliable and fast. Finally, the web server should do double duty as a load balancer for the application server (SSL not required; the server lives behind an ASA). I am not sure how fast Apache httpd is compared to the alternatives, but it's proven, road-warrior-tested software.
So, if I roll with Nginx or another Apache alternative, will there be any difference whatsoever in terms of visible performance? I assume not, but in the interest of achieving near-instant page loads, I'm putting the question out there ;-)
if I roll with Nginx or another Apache alternative, will there be any difference whatsoever in terms of visible performance?
Yes, mostly in terms of latency.
According to Google (who might know a thing or two about latency), latency matters for the user experience, for search-engine rankings, and for surviving high loads (success, script kiddies, real attacks, etc.).
But scaling on multicore and/or using less RAM and CPU resources cannot hurt - and that's the purpose of these Web server alternatives.
The benchmarks I have seen indicate that Nginx, Lighttpd, G-WAN (in particular) and friends blow away Apache in terms of performance, but this blowing away appears to manifest more in high-load scenarios where the web server is handling many simultaneous connections
The benchmarks show that even at low numbers of clients, some servers are faster than others: there, Apache 2.4, Nginx, Lighttpd, Varnish, Litespeed, Cherokee and G-WAN are compared.
Since this test was made by someone independent of the authors of those servers, these results (obtained with virtualization and 1, 2, 4 and 8 CPU cores) carry clear value.
There will be a massive difference. Nginx wipes the floor with Apache for anything over zero concurrent users. That's assuming you properly configure everything. Check out the following links for some help diving into it.
http://wiki.nginx.org/Main
http://michael.lustfield.net/content/dummies-guide-nginx
http://blog.martinfjordvald.com/2010/07/nginx-primer/
You'll see improvements in terms of requests/second, but you'll also see significantly lower RAM and CPU usage. One thing I like is the greater control over what's going on, with a simpler configuration.
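For example, here is a minimal sketch of an Nginx setup that proxies to Jetty and serves static files directly (the ports, paths and upstream name are illustrative assumptions):

upstream jetty_backend {
    server 127.0.0.1:8080;   # one Jetty instance; add more server lines to load-balance
}

server {
    listen 80;
    server_name example.com;

    # requests for /static/* are served straight from /var/www/app/static/
    location /static/ {
        root /var/www/app;
    }

    # everything else is proxied to Jetty
    location / {
        proxy_pass http://jetty_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}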
The Apache team claimed that Apache 2.4 would offer performance as good as or better than Nginx. They made a bold claim calling out Nginx, and when they made that release it kinda bit them in the ass. They're closer, sure, but Nginx still wipes the floor with Apache in almost every single benchmark.
I have an application that is load balanced across two web servers (soon to be three), and deployments are a real pain. First I have to do the database side, but that breaks the production code that is running; and if I do the code first, the database side isn't ready, and so on.
What I'm curious about is how everyone here deploys to a load balanced cluster of X servers. Since publishing the code from test to prod takes roughly 10 minutes per server (multiple services and multiple sites) I'm hoping someone has some insight into the best practice.
If this was the wrong site to ask (Meta definitely didn't apply, and I wasn't sure if Server Fault did, since I'm a dev doing the deployment), I'm willing to re-ask elsewhere.
I use NAnt scripts and PsExec to execute them.
Basically, there's a master server in the farm that copies the app and DB scripts locally and then executes a deployment script on each server in the farm. That script copies the code locally, modifies it if needed, takes the app offline, deploys the code, and brings the app back online.
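As a rough sketch, kicking off that per-server script from the master looks something like this at a command prompt (the server list, domain account and build file are made up):

rem run the deploy target on every server listed in servers.txt
for /f %s in (servers.txt) do psexec \\%s -u DOMAIN\deploy nant -buildfile:C:\deploy\deploy.build deploy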
Usually the app is off for about 20 seconds (5 nodes).
Also, I haven't tried it but I hear a lot about MSDeploy.
Hope this helps
Yeah, if you want to do this with no downtime you should look into HA (High Availability) techniques. Check out a book by Paul Bertucci - I think it's called SQL Server High Availability or some such.
Otherwise, put up your "maintenance" page, take all your app servers down, do the DB and one app server first, then go live and do the other two offline.