I have a Rails 3.2 application running on a production server. The server has 8 GB of RAM and every other process works fine, but there is a ruby process that keeps memory utilization high. I have to log in to the server console manually, run top, and kill the process by its PID.
I am unable to figure out how to check which ruby process is taking so much memory, or how to keep it under control permanently.
Please suggest a solution.
Thanks.
Could be so many things. Finding memory leaks is tough. What kind of application server are you using? If you're using Unicorn, consider checking out Puma. It's actually really easy to switch over; we saw big gains in our app when we switched to Puma.
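Switching is mostly a matter of swapping the gem and adding a small config file. A bare-bones config/puma.rb might look like this (just a sketch; the worker and thread counts are placeholders to tune for your own app):

    # config/puma.rb -- counts here are placeholders, not recommendations
    workers 2          # forked worker processes (cluster mode on MRI)
    threads 1, 16      # min, max threads per worker
    preload_app!       # load the app before forking so workers share memory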
Also look through your app for n+1 queries. Optimizing some queries here and there would help tremendously.
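For example (with hypothetical Post and Comment models), the classic N+1 and its fix look like this:

    # N+1: one query for the posts, then one COUNT query per post:
    Post.limit(10).each { |post| puts post.comments.count }

    # Eager loading collapses this to two queries total:
    Post.includes(:comments).limit(10).each { |post| puts post.comments.size }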
Another thing you could try is moving longer-running tasks to a background job with something like Sidekiq.
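A Sidekiq job is only a few lines. A minimal sketch, where ReportWorker and ReportMailer are hypothetical names:

    # app/workers/report_worker.rb
    class ReportWorker
      include Sidekiq::Worker

      def perform(user_id)
        # the slow work happens here, outside the request cycle
        ReportMailer.weekly_report(user_id).deliver
      end
    end

    # enqueue from a controller; this returns immediately
    ReportWorker.perform_async(current_user.id)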
Lots of performance monitoring services out there, like New Relic, that you could check out as well. Without more info it's a tough question to answer.
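As for the first half of the question, spotting which ruby process is actually eating the memory, a quick script beats eyeballing top. A sketch that assumes a Linux server with a standard ps:

    #!/usr/bin/env ruby
    # List ruby processes by resident memory (RSS), biggest first.
    `ps -eo pid,rss,args`.lines.drop(1)
      .map     { |l| pid, rss, cmd = l.split(' ', 3); [pid, rss.to_i, cmd.strip] }
      .select  { |_, _, cmd| cmd.include?('ruby') }
      .sort_by { |_, rss, _| -rss }
      .each    { |pid, rss, cmd| printf("%-8s %6d MB  %s\n", pid, rss / 1024, cmd) }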
I have been using IntelliJ IDEA 12 for developing Java applications, and it has been the best experience I've had with an IDE. It was working fine until recently, when it started showing the heap size warning recommending that I increase -Xmx, asking me to ignore it or shut down. The behaviour is odd: the IDE starts at 300 MB, then keeps taking more memory until it reaches 750+ MB, which is when the warning appears.
I switched back to Eclipse, where the memory footprint stays stable at 300 MB and doesn't grow over time like IntelliJ's does.
Is IntelliJ running some background process related to my code that causes this growth, or is it a memory leak in the IDE?
I've used IDEA for 10 years (and used IDEA 12 for a year before switching to IDEA 13 EAP builds) and have never had a memory issue. And I do not see any consistent mention of memory issues in the IDEA forums.
That said, a memory leak was just fixed (as in released today) in the IDEA 13 EAP. The VcsLogGraphTable class had a leak. The ticket does not give any indication if the leak was/is present in IDEA 12. Based on the name of the class, it should only come into play for Git or Hg graphs (but Hg graphs were added in 13). Based on my experience with how they do tickets, I interpret this as an IDEA 13 issue.
First, make sure you are using the latest version, 12.1.6.
Oftentimes memory issues are the result of a poorly written third-party plug-in. Try disabling all third-party plug-ins and see if the issue goes away.
The other thing you can do is follow the instructions in the document How to report IntelliJ IDEA performance problems and take CPU snapshots and report the issue to JetBrains. That way they can confirm a leak in IDEA 12, or tell you what plug-in is the culprit.
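If it turns out you really do just need more headroom, the heap ceiling on Windows/Linux typically lives in bin/idea.vmoptions (idea64.vmoptions for the 64-bit launcher). The values below are only examples to adjust for your machine, not recommended settings:

    -Xms256m
    -Xmx1024m
    -XX:MaxPermSize=350m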
Virtual Machine:
4 CPUs
10 GB RAM
10 GB swap
Java 1.7
-Xms6144m -Xmx6144m (fixed 6 GB heap; see the note below the list)
Tomcat 7
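With Tomcat 7, those flags would typically be set via CATALINA_OPTS in bin/setenv.sh. That location is an assumption about this setup, shown only to make the flags unambiguous:

    # bin/setenv.sh (sourced by catalina.sh on startup)
    CATALINA_OPTS="-Xms6144m -Xmx6144m"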
We observed some very strange behaviour from the JVM: its resident memory began to shrink, and swap usage shot up to over 50%.
Please see the stats below from our monitoring tools.
http://i44.tinypic.com/206n6sp.jpg
http://i44.tinypic.com/m99hl0.jpg
Any pointers to help understand this would be much appreciated.
Thanks!
Or maybe your Java program was idle and didn't need that memory, and you have high swappiness? In that situation the OS frees up RAM just in case and keeps only the actively used part resident.
In my opinion that is actually good behaviour: why waste RAM on a process that won't use it?
If you run only this one process on the VM, though, it would be a good idea to set swappiness to 0 or some other small number: the memory was given to this single process, so you may as well stop the OS from swapping it out.
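Concretely, on a Linux guest that would look something like this (run as root; the last line persists the setting across reboots):

    sysctl vm.swappiness                          # show the current value (60 by default)
    sysctl -w vm.swappiness=0                     # apply immediately
    echo 'vm.swappiness = 0' >> /etc/sysctl.conf  # persist across reboots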
Thanks for the response. Yes, this is closer to system troubleshooting than to Java, but I thought this was the right forum to raise the topic in case anybody has seen such a phenomenon with the JVM.
Anyway, I had already checked top, and no, there was no process other than Java that was hungry for memory; the second-largest process was using 72 MB (RSS).
No, swappiness is not set aggressively on this system; it is at the default of 60. One additional piece of information I forgot to share: we have 4 app servers in a cluster, and all of them showed this behaviour at exactly the same time. AFAIK the JVM does not swap itself out, but the OS would. That is what is confusing me.
All of these app servers are in production and busy serving requests, so they are not idle. Heap usage averaged 5 GB of the 6 GB.
The other interesting thing I found was some failure messages in the VMware logs at the same time, which is what I'm investigating.
A week ago my computer started freezing every few seconds, with freezes lasting anywhere from 30 seconds to 2 minutes.
So I opened Process Explorer to monitor it, to see if I was getting CPU spikes and, if so, which application was causing them. After some freezes I noticed that none of my programs/services was causing them.
Then I checked whether any of my fans had stopped working, but all the fans are working fine.
Eventually I ran a chkdsk scan (along the way I had tons of crashes and startup problems; I couldn't even run the Windows installation disc because of memory diagnostic problems. I really had a lot of problems).
Eventually I found the problem: it appears my DW hard drive is faulty. Here are the hard drive results:
http://pastie.org/2949300
Now I'm searching the web for a tool that could fix all of its problems, because I really need the drive to work.
Windows 7 Ultimate 64-bit
Intel E6320
4 GB DDR2
ATI HD 5450
Please help me: what can I do to fix it? (My OS is on it.)
Buy a new hard drive, install Windows on that, and see what you can read off the old disk. You're getting read and write errors in chkdsk, crashes, etc.; the disk is on its way out.
First of all, try to get a backup of your hard drive / your data. Any action you perform right now can lead to data loss.
I don't know of a web tool for fixing these problems; normally an extended chkdsk (/r /p) should have fixed them. Your log shows insufficient free space on the partition. Can you move some files to another disk and try running chkdsk again?
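For reference, the repair pass from an elevated command prompt would be the following. Note that /r implies /f; the /p switch only exists in the old XP-era Recovery Console:

    chkdsk C: /r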
Our network team is thinking of setting up a virtual desktop environment (via a Windows 2008 virtual host) for each developer.
So we are going to have dumb terminals/laptops and should be using the virtual desktops for all of our work.
Ours is a Microsoft shop and we work with all versions of the .NET Framework. Not having the development environments on the laptops is making the team uncomfortable.
Are there any potential problems with that kind of setup? Is there any reason to be worried about this setup?
Unless there's a very good development-oriented reason for doing this, I'd say don't.
Your developers are going to work best in an environment they want to work in. Unless your developers are the ones suggesting it and pushing for it, you shouldn't be instituting radical changes in their work environments without very good reasons.
I personally am not at all a fan of remote virtualized instances for development work, either. They're often slower, you have to deal with network issues and latency, you often don't have as much control as you would on your own machine. The list goes on and on, and little things add up to create major annoyances.
What happens when the network goes down? Are your devs just supposed to sit on their hands? Or maybe they could bring cards and play real solitaire...
Seriously, though: unless you have virtually 100% network uptime and your devs never work off-site (say, from home), I'm on the "this is a Bad Idea" side.
One option is to get rid of your network team.
Seriously though, I have worked with this same type of setup through VMWare and it wasn't much fun. The only reason why I did it was because my boss thought it might be worth a try. Since I was newly hired, I didn't object. However, after several months of programming this way, I told him that I preferred to have my development studio on my machine and he agreed.
First, the graphical interface isn't really crisp on a virtual workstation, since images are sent over the network rather than rendered by your own video card's driver. Constantly looking at this gave me a headache.
Secondly, any install of components or tools required the network administrator's help, which meant I had to hurry up and wait.
Third, your computer is going to run one application faster than your server is going to run many apps, and on top of that, the server has to send the rendered image over the network. It doesn't sound like much of a slowdown, but it is. Again, hurry up and wait.
Fourth, this may be specific to VMware, but the virtual disk size was fixed at 4 GB, which my network guy seemed to think was enough. It filled up rather quickly. To expand the drive, I had to wait for the network admin to run Partition Magic on it, which corrupted it, and I had to have him rebuild my installation.
There are several more reasons, but I would strongly encourage you to protest if you can. Your company is probably trying to implement this because it's a new fad and a way to save money. However, your wasted productivity needs to be counted as a cost too.
Bad Idea. You're taking the most critical tool in your developers' arsenal and making it run much, much, much slower than it needs to, and introducing several critical dependencies along the way.
It's good if you ever have to develop on-site: you can move your dev environment to a laptop and hit the road.
I could also see it being required for highly confidential work across multiple clients: it gives you proof that you didn't leak any test data or debug files from one customer to another.
Downsides:
Few VMs support multiple monitors, and without multiple monitors you can't be a productive developer.
Only VirtualBox 3 comes close to supporting OpenGL/ActiveX development in a VM.
In my experience, virtual environments are ideal as test environments (for testing deployments), not as development environments. They are great as a blank slate / clean sheet for testing. I think the risk of alienating your developers is high if you pursue this route. Developers should have the best tools at their disposal, i.e. a high-spec laptop / desktop; this keeps morale and productivity high.
Going down this route also precludes any home working, which may or may not be an issue. Virtual environments are by their nature slower than dedicated environments, and you may also have issues with multiple-monitor setups on a VM.
If you go that route, make sure you benchmark the system aggressively before making any serious commitment.
My experience with remote desktops is that they're OK for occasional use, but seldom sufficient for the intensive computation and compilation typical of development work, especially at crunch time when everyone needs resources at the same time.
Not sure if this will affect you, but both VMware and Virtual PC are very slow when viewed via Remote Desktop. For some reason Radmin (http://www.radmin.com/) does a much better job.
I regularly work in remote development environments and it is OK (although it takes some time to get used to keeping track of which system you're working on at the moment ;)), but most of the time I'm alone on the system.