Is it normal that my Grails application is using more than 200 MB of memory at startup? - optimization

My Grails application is running in a development environment. I haven't gone into production yet, but in any case, is it normal that my Grails application requires 230 MB at startup alone (with an empty bootstrap and no requests handled so far)?
Do you know why this is the case, how to improve memory usage in development mode and, most importantly, whether it is reduced in a production environment?

To answer your questions, yes - it is normal. It's especially normal if you have a lot of GSPs in your application. GSPs are compiled at runtime, so you can speed up their generation by increasing your permgen space.
You can improve memory use, and performance in general, by making sure that you are passing the '-server' flag when you start your server JVM.
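For reference, a minimal sketch of those flags, assuming a Unix shell and a pre-Java-8 JVM (where permgen still exists); whether the startup script reads JAVA_OPTS or GRAILS_OPTS depends on your Grails version, and the sizes below are only placeholders:

    # Placeholder sizes; -server selects the server HotSpot compiler and
    # MaxPermSize raises the permgen ceiling used for runtime-compiled GSPs.
    export JAVA_OPTS="-server -Xmx512m -XX:MaxPermSize=256m"
    grails run-app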

I wouldn't blame all that memory usage just on Grails. Because it uses an embedded Tomcat (Jetty in older versions), there will be a decent amount of overhead even when running an empty application.
IMO, 230 MB is a lot of memory for a Java application, but high memory usage is just part of life when writing JVM-based applications.

My online Grails applications run on a VPS with only 512 MB (which also hosts a Drupal CMS, Apache, the email services, ... and the Tomcat that runs Grails), so you can definitely tune your application to use less memory.

Related

Limit Process Memory Usage

This question relates to .NET processes, in this case the dotnet command.
Here's the scenario: I'm running various .NET Core web applications on an Ubuntu server using Kestrel, with Apache as a reverse proxy, following the Microsoft guide.
The question is: is there a way to limit the memory used by each dotnet process, or a workaround to do so? The reason for limiting the memory is that I have a web application running a Hangfire server which consumes a lot of memory without any limit, and I would like better control over the memory allocated to each application.
I tried looking into the runtimeconfig.json file but didn't find anything.
If there is no way, maybe some way to limit the memory of a Linux process could do the trick?
Many thanks!
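One generic Linux-side sketch (purely illustrative, not something from this thread): recent systemd versions can cap a process's memory through cgroups, for example with systemd-run; the application path and the limit below are made up.

    # Illustrative only: MemoryMax is the cgroup-v2 property on recent systemd
    # (older releases use MemoryLimit); path and 512M cap are hypothetical.
    systemd-run --scope -p MemoryMax=512M dotnet /var/www/myapp/MyApp.dll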

Bottle WSGI server vs Apache

I don't actually have a problem, I'm just a bit curious about something.
I made a Python web framework based on Bottle (http://bottlepy.org/). Today I tried a small benchmark to compare the performance of the Bottle WSGI server and the Apache server. I work on Lubuntu 12.04, using Apache 2, Python 2.7 and the Bottle development version (0.12), and got this surprising result:
As stated in the Bottle documentation, the included WSGI server is only intended for development purposes. The question is: why is the development server faster than the deployment one (Apache)?
As far as I know, a development server is usually slower, since it provides some "debugging" features.
Also, I never got a response in less than 100 ms when developing PHP applications. But look, it is just 13 ms in Bottle.
Can anybody please explain this? It just doesn't make sense to me. A deployment server should be faster than the development one.
Development servers are not necessarily faster than production-grade servers, so such an answer is a bit misleading.
The real reason in this case is most likely lazy loading of your web application on the first request that hits a process. Especially if you don't configure Apache correctly, you can hit this lazy loading quite a bit if your site doesn't get much traffic.
I would suggest you go and watch my PyCon talk, which deals with some of these issues.
http://lanyrd.com/2013/pycon/scdyzk/
In particular, make sure you aren't using the prefork MPM. Use mod_wsgi daemon mode in preference.
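For reference, daemon mode only needs a couple of mod_wsgi directives; the application path and the process/thread counts below are placeholders, not recommendations:

    # Hypothetical app path and sizing; runs the WSGI app in dedicated daemon
    # processes instead of inside Apache's own (possibly prefork) workers.
    WSGIDaemonProcess myapp processes=2 threads=15
    WSGIProcessGroup myapp
    WSGIScriptAlias / /var/www/myapp/app.wsgi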
A deployment server should be faster than the development one.
True. And it generally is faster... in a "typical" web server environment. To test this, try spinning up 20 concurrent clients and have them make continuous requests to each version of your server. You see, you've only tested one request at a time, which is certainly not a typical web environment. I suspect you'll see different results (we're thinking of both latency and throughput here) with tens or hundreds of concurrent requests per second.
To put it another way: at 10, 20, or 100 requests per second, you might still see ~200 ms latency from Apache, but you'd see much worse latency from Bottle's server.
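If you want to run that kind of test, Apache's ab tool is enough for a rough comparison; the request counts and URL below are placeholders - point it at the Bottle server and the Apache vhost in turn:

    # 2000 requests, 20 concurrent, against whichever server is under test.
    ab -n 2000 -c 20 http://127.0.0.1:8080/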
Incidentally, the Bottle docs do refer to concurrency:
The built-in default server is based on wsgiref WSGIServer. This non-threading HTTP server is perfectly fine for development and early production, but may become a performance bottleneck when server load increases.
It's also worth noting that Apache is doing a lot more than the Bottle reference server is (checking .htaccess files, dispatching to child process/thread, robust logging, etc.) and all those features necessarily add to request latency.
Finally, I'd ask whether you have tuned the Apache installation. It's possible that you could configure it to be faster than it is now, e.g. by tuning the MPM, simplifying logging, or disabling .htaccess checks.
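As one concrete example of the .htaccess point, per-directory overrides can be switched off entirely (the directory path is a placeholder):

    # Stops Apache checking for .htaccess files on every request under this tree.
    <Directory /var/www/myapp>
        AllowOverride None
    </Directory>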
Hope this helps. And if you do run a concurrent benchmark, please do share the results with us.

JRuby Warbler executable performance

I am developing a JRuby on Rails app that needs to be deployed to clients' servers. We want to be able to compile the app so that the source cannot be read and copied (easily). From what I've read, Warbler seems to be the way to go.
My concern is the performance of the app in standalone mode, meaning just running it as "java -jar MyApp.war" as opposed to using Glassfish, Tomcat, etc. The distributed app won't be high traffic, maybe 20-30 users max. If anything it'd be heavier on the db side, which is a separate issue.
So how does this type of scenario compare, performance-wise, with running on an actual application server?
Using Glassfish, Tomcat or JBoss (or TorqueBox) will perform just as well, as long as the JVM has enough memory.
You will need to tweak loading/compiling the assets depending on the deployment server.
If it's supposed to be a web app then you will need the war/Tomcat route. If it should be a desktop app then just use the jar version.
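For the standalone case in the question, the usual Warbler flow looks roughly like this; the heap size is just an example value to tune for your own load:

    # Build a self-executable war (Warbler bundles an embedded servlet container),
    # then run it with an explicit heap cap; 512m is an arbitrary example.
    warble executable war
    java -Xmx512m -jar MyApp.war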

Run NativeProcess from AIR on a *different core* to the AIR application

My application can be fairly CPU-intensive, as can the server I launch from my application using NativeProcess.
The problem is that they both end up on the same core. On a quad-core machine, they both slow to a crawl as they're severely limited in their share of that one core.
Is there any way to launch a native process on a different core, or in a way that won't result in such a shared, throttled bottleneck?
If you are already using NativeProcess, you could also set the CPU affinity in a platform-specific way.
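On Linux, for example, the command handed to NativeProcess could be wrapped with taskset; the core number and server path below are hypothetical, and Windows/macOS need their own equivalents:

    # Pins the spawned server to core 2 only, leaving the other cores to the AIR app
    # (Linux-specific; core id and binary path are made up for illustration).
    taskset -c 2 /opt/myserver/server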

Stress testing a server, and VPSs vs. dedicated servers

We used to have a dedicated server (1&1) and very infrequently ran into problems with it.
Recently, we migrated to a VPS (Wiredtree.com) with similar specs to our old dedicated server, but we notice frequent problems running out of memory, MySQL having to restart, etc., both when knowingly running intensive scripts and also just randomly during normal use.
Because of this, we're considering migrating to another VPS, this time at Slicehost, to see if it performs better.
My question is twofold:
Are there straightforward ways we could stress test a VPS at Slicehost to see if the same issues occur, without having to actually migrate everything over?
Also, is it possible that the issues we're facing aren't just down to the provider (Wiredtree) but to the difference between a dedicated box and a VPS (despite having similar specs)?
The best way to stress test an environment is to put it under load. If this VPS is hosting a web application, use one of the many available web server benchmark tools: ab, httperf, Siege or http_load. You don't necessarily care that much about the statistics from the tool itself, but more that it puts a predictable load on the server so that you can tune Apache to handle it, or at least not crash and burn.
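For example, siege makes it easy to hold a steady concurrent load for a fixed period; the concurrency, duration and URL below are placeholders:

    # 25 concurrent simulated users for two minutes against the server under test,
    # not the production box.
    siege -c 25 -t 2M http://test-host.example.com/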
The one problem you have with testing against Slicehost is that you are at the mercy of the Internet and your bandwidth to Slicehost. You may not be able to put enough load on the server to reach a meaningful conclusion.
Instead, you might find it just as valuable to run one of the many virtualization products on the market and set up a VM with comparable specs to the VPS plan you're considering. Local testing over your LAN will allow you to put a higher and more predictable load on the server.
In either case, you don't need to migrate everything, but you will need to set up an environment for your application to run in, with representative data in your database.
A VPS with similar specs to a dedicated server should perform approximately the same, but in order to get good performance, you still need to tune Apache, MySQL and any other long-lived server processes. In my experience, the out-of-the-box configuration of Apache in many Linux distributions is not ideal and will allow far too many child processes, overcommitting memory and sending the server into a swap-death spiral.
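As a rough illustration of keeping prefork in check, the numbers below are placeholders to be sized from measured per-child memory, not recommended values (Apache 2.4 renames MaxClients to MaxRequestWorkers):

    # Example limits only; size MaxClients from (RAM available to Apache) / (per-child RSS).
    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients           50
        MaxRequestsPerChild 500
    </IfModule>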