My IntelliJ 13.1.5 constantly indexes my project, which really slows my machine down. It does this when I rebuild my project as well as when I start my Jetty server.
Does anybody know how to disable or at least limit that behavior?
The previous version didn't do that so often.
Actually, I found what was wrong.
One of my modules didn't have its target folder excluded, which caused IntelliJ to index it constantly, and since that module is big it would take forever to index.
Solution:
Go to "Project Structure" -> "Modules" and excluded all target folders.
Starting with IntelliJ 2017.2, indexing can at least be paused.
To other unfortunate souls working in the enterprise, mostly on VDIs without an SSD: IDEA actually parses/indexes a lot more than your project folders. Likely candidates that make your whole day a rant session:
Libraries and linters specified at a global level, for example "Languages & Frameworks / JavaScript / Libraries" or "TypeScript / TsLint / TsLint Packages". If you work in multiple languages, this can bloat your index quite a lot. It's usually much better to open just the one small part of a project related to what you are working on, to keep the index as small as possible.
As mentioned before: target and node_modules folders
dist, mock, resource folders
Do not open multiple projects/modules in the same project scope. In theory this saves you time because you don't have to wait to reopen the given module in another window, but in reality you are just adding more stuff to the index. If you happen to git pull a project with 5-6 different modules, your IDEA will go into stasis for half an hour to index all the changes.
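If you are not sure which directories are bloating the index, a quick size check from the shell usually makes the offenders obvious (a rough sketch; the glob patterns are assumptions about your layout):

du -sh */target */node_modules */dist 2>/dev/null | sort -h   # biggest offenders end up last (sort -h needs GNU or a recent BSD sort)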
Try invalidating the cache and restarting IntelliJ.
I had a similar issue and this solved it:
IntelliJ IDEA caches a great number of files, therefore the system cache may one day become overloaded. In certain situations the caches will never be needed again, for example, if you work with frequent short-term projects. Also, the only way to solve some conflicts is to clean out the cache.
To clean out the system caches:
On the main menu, choose File | Invalidate Caches/Restart. The Invalidate Caches message appears; choose Invalidate and Restart.
(Source: the IntelliJ IDEA documentation.)
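If the menu option alone does not help, the same caches can be wiped by hand with the IDE closed. The paths below are assumptions for IntelliJ 13 on Linux; check idea.system.path in bin/idea.properties if your system directory has been relocated:

# wipe only the index and cache data; the IDE rebuilds them on the next start
rm -rf ~/.IntelliJIdea13/system/caches ~/.IntelliJIdea13/system/index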
Related
I'm a Community College instructor grading student C++ coding assignments. Been doing the same task all semester. Suddenly, this morning, CLion is building extremely slowly, perhaps even hanging, the second time I build/run a project. WTF? The projects are very small. One source file, one header, no libraries.
What changed? And why would a second build be the problem? It's usually first builds that are slow.
What changed? My hard-drive backup software. I told my auto-backup to take 2 hours off and have had no problem since.
I was recently forced to switch from the Code42 CrashPlan software I've been using for years to Carbonite. Code42 is getting out of the end-user backup business.
Note that I believe this problem is at least 95% user error, in how I configured my backup, and max 5% anything to do with Carbonite's implementation. Maybe their file locking strategy is different from CrashPlan's; I don't know.
I did think twice before I configured my CLion Projects folder to be backed up. I knew that backing up object files would be a waste of backup cycles/space. But I was in a hurry and wanted my solution source code to get backed up by Carbonite until I can check it into a repository of some sort at the end of the semester. I'm pretty sure I can go in and refine my backup strategy to NOT include the object/executable folders.
On both Xcode 4.4 and 4.4.1 I'm experiencing the same issue: in the specific project I'm working on, I don't seem to be able to rename any classes or variables from the Refactor menu option.
Each time I try to do a rename, I type in the new name for the class/variable and click Preview, at which point a spinner starts in the bottom left with Finding files.... However, I then get a message saying:
The selection is not a type that can be renamed.
Make a different selection and try again.
I'm pretty sure that this is not an issue with my specific install of Xcode, because I can refactor other projects fine; it's just this specific project that I can't seem to refactor.
Anyone with any ideas? I don't have any particularly exotic configuration for this project; it just seems to be a random affliction. I've deleted all of my derived data and re-indexed, but that doesn't seem to help.
Since it works OK in other projects, one thing I could try is to regenerate the project file(s) themselves. Is there a way to do this automatically?
If they're in Dropbox, get them out of there. It mangles project files. I've had it happen numerous times, and at times it makes Refactor > Rename not work.
I have managed to solve this issue after trying many different things (tweaking project settings, the pch, etc.), and it turns out there was a very simple (and totally counter-intuitive) fix.
All I have done is:
Copy my entire project folder (so from Project to Project Copy).
Move Project (the original folder) to trash.
Rename Project Copy to Project.
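In shell terms the same dance looks roughly like this (a sketch only; the poster used the Finder, which moves the original to the Trash rather than deleting it outright, and "Project" stands in for your real folder name):

cp -R Project "Project Copy"   # duplicate the entire project folder
rm -rf Project                 # drop the original (Move to Trash is recoverable; rm -rf is not)
mv "Project Copy" Project      # give the copy the original name back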
Mysteriously, everything now works fine.
I really cannot figure out why this works. As mentioned previously, I had already deleted all derived data, etc. so I don't know why this should make things spontaneously work, but it does.
I'd appreciate it if anyone can shed some light on this, as it does expose just how fiddly Xcode can be, and any understanding of what goes on under the hood is always beneficial.
Sounds like a buggered index.
I usually use the nuke from space option to delete everything in the derived data directory.
Unless you have changed it (I change mine to /tmp/bbum-derived), it'll be at:
~/Library/Developer/Xcode/DerivedData
Thus, I'll quit Xcode and do:
rm -rf ~/Library/Developer/Xcode/DerivedData
Yes, it is a bit brute force, but it works. You can likely force Xcode to rebuild the index from the UI, but I never bother. Of course, I'm also installing quite a few "odd" builds of this and that as a part of my day job...
(That is an rm -rf. It means "nuke everything and don't ask" in Unix parlance. It is dangerous. Do not mistype that command.)
It seems you have an active selection somewhere in the GUI; perhaps some of your files or classes are selected? Try deselecting in every sub-window and retry the refactoring.
I'm a bit late to this thread, but I ran into the same problem today and was finally able to get it to refactor correctly, so I thought I'd share it.
So, in large part I did what bbum said: I closed Xcode, nuked the derived data for the project the class files were in, and reopened the project. Doing just that didn't work; the key, I found (at least for me), is that I had to do a clean (Command-Shift-K) after Xcode restarts. After that I was able to rename the class files again :)
Also, as a side note, my project is divided into the main project and a static library. When I had to rename classes in the static library, I had to quit the main project and do what I described in the static library itself. Somehow I got the same error described in the question when I tried to do the refactor/rename from the main project.
Good luck!
This thread was very helpful for me in determining the problem.
It turned out that I had to Repair Disk with Disk Utility. I had visited a site earlier that had hijacked Safari and was telling me to call a number for emergency repairs, an obvious scam.
I followed the Disk Utility instructions to repair the disk (including restarting with Cmd-R pressed). Another clue was that I tried to commit to git and Xcode said No Way, Jose.
Afterwards I was able to refactor and commit changes as if nothing ever happened. I hope this helps someone else as a possible cause to investigate.
I really like IDEA, but when I work with a webapp running on Tomcat and I modify only a single Java class file, I have to do an "Update classes and resources", and it takes much more time than in Eclipse. In Eclipse it's instant, or at least I don't notice anything; in IDEA it does a make, updates caches, and I don't know what else, but it's really annoying.
Why is that and how can I solve this?
The update time depends on your project and its configuration in IDEA. Normally it should not take too long, as only the required steps are performed. Compilation is incremental and should be instant. To understand why it takes so long for your project, we'll need a sample project and the exact steps to reproduce it; please file an issue in our issue tracker.
If you want really fast updates, you may consider using JRebel; it has a plug-in for IDEA.
Not so with IntelliJ 10.x. Updates don't require a complete build and redeployment. Try the new version.
I am not sure, but you can check your Project Settings. There, in the Modules section, you can mark unnecessary folders as excluded.
This might speed things up, as the unnecessary files are no longer being indexed.
I've been looking for an IDE or text editor that can save "snapshots" of my code. I have been coding in a lot of new languages lately using a lot of trial and error. I often find myself wanting to revert a file or multiple files because of coding/design decisions made in the last couple of hours or even days. It would be nice if I could take a snapshot of my code periodically and revert to a past snapshot, rather than manually making copies of my files at intervals.
I'd imagine there has to be at least an editor that shows the version history of a file. (I realize I could use git or SVN or any other versioning solution, but I'd like a more automated process.)
(If not an IDE, does anyone know how to configure Windows Shadow Copy or OS X Time Machine to back up my development folder at 45-minute intervals... or even a third-party program?)
Eclipse keeps a local history on a per-file basis... but personally I would strongly suggest using source control for this. If you use something like git or Mercurial, your commits are all local anyway - and it means you'll have a consistent snapshot at the moments when you believe you've reached a useful point.
With a bit of experience it only takes a few seconds to commit your current work, and I think it's likely to prove more useful over time than automatically snapshotting either every save or at random intervals.
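For instance, taking a local snapshot with git (Mercurial is analogous) is only a couple of commands; the commit message below is purely illustrative:

git init                                      # one-time: turn the working folder into a repository
git add -A                                    # stage everything that changed
git commit -m "snapshot before the rewrite"   # a local snapshot you can diff against or revert to later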
(It's hard to know whether Eclipse will actually be useful to you, as you haven't specified which language you'll be programming in. Admittedly there are plugins for a fair number of languages in Eclipse...)
I am not an experienced user of SCM tools, even though I am convinced of their usefulness, of course.
I used some obscure commercial tool in a former job and Perforce in the current one, and played a bit with TortoiseSVN for my little personal projects, but I disliked having lots of .svn folders all over the place, making searches, backups and such more difficult.
Then I discovered the interest of distributed SCM and I chose to go the apparently simpler (than git) Mercurial way, still for my personal, individual needs. I am in the process of learning to use it properly, having read part of the wiki and being in the middle of the excellent PDF book.
I often see it repeated, for example in Mercurial Working Practices: "don't hesitate to use multiple trees locally. Mercurial makes this fast and light-weight." and "for each feature you work on, create a new tree."
This is interesting and sensible advice, but it goes a bit against my habits with centralized SCM, where we have a "holy" central repository in which branches are carefully planned (and handled by administrators), changelists must be checked by (senior) peers and must not break the builds, etc. :-) Starting to work on a new branch takes quite some time...
So I have two questions in the light of above:
How practical is it to make lots of clones, in the context of IDEs and such? What if the project has configuration/settings files, makefiles or Ant scripts or shell scripts or whatever, needing path updates? (Yes, probably a bad idea...) For example, in Eclipse, if I want to compile and run a clone, I have to create yet another project, tweaking the Java build path, the Run/Debug targets, and so on. Unless an Eclipse plugin eases that task. Am I missing some facility here?
How does that scale? I have read that Hg is OK for large code bases, but I am perplexed. At my job, we have a Java application (well, several around a big common kernel) of some 2 million lines, weighing in at some 110 MB for code alone. Doing a clean compile on my old (2004) Windows workstation takes some 15 minutes to generate the 50 MB of class files! I don't see myself cloning the whole project to change 3 files. So what are the practices here?
I haven't yet seen these questions addressed in my readings, so I hope this will make a useful thread.
You raise some good points!
How practical is it to make lots of clones, in the context of IDEs and such?
You're right that it can be difficult to manage many clones when IDEs and other tools depend on absolute paths. Part of it can be solved by always using relative paths in your configuration files -- making sure that a source checkout can compile from any location is a good goal in itself, no matter what revision control system you use :-)
But when you cannot or don't want to bother with several clones, please note that a single clone can cope with multiple branches. The "hgbook" emphasizes many clones since this is a conceptually simple and very safe way of working. When you get more experience you'll see that you can use multiple heads in a single repository (perhaps naming them with bookmarks) to do the same.
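A minimal sketch of that bookmark style of working (in older Mercurial releases bookmarks ship as an extension you may need to enable in your hgrc; the bookmark name is just an example):

hg bookmark bug-434   # label the current head; your next commits move the bookmark along
...                   # edit files and hg commit as usual
hg bookmarks          # list bookmarks and see which heads they point at
hg update bug-434     # later, jump straight back to that line of work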
How does that scale?
Cloning a 110 MB repository should be quite fast: it depends on how long it takes to write 110 MB to your disk. In a recent message to the Mercurial mailing list it was reported that cloning 6.3 GB took 4 minutes -- scaling that down to 110 MB gives about 4 seconds. That should be fast enough that your tea is still warm :-) Part of the trick is that the history data are simply hard-linked (yes, also on Windows), so it is only a matter of writing out the files in the working copy.
PhiLo: I'm new at this, but Mercurial also has "internal branches" (named branches) that you can use within a single repository instead of cloning it.
Instead of
hg clone toto toto-bug-434
you can do
cd toto
hg branch bug-434      # mark the working copy as being on a new named branch
...                    # edit files
hg commit              # the first commit is what actually creates the branch
hg update default      # switch back to the default branch
hg update bug-434      # ...and back to the bug branch again whenever needed
to create a branch and switch back and forth. Your built files that are not under revision control won't go away when you switch branches; some of them will just go out of date as the underlying sources are modified. Your IDE will rebuild what's needed and no more. It works much like CVS or Subversion.
You should still have clean 'incoming' and 'outgoing' repositories in addition to your 'work' repository. Just that your 'work' can serve multiple purposes.
That said, you should clone your work repo before attempting anything intricate. If anything goes wrong you can throw the clone away and start over.
Question 1:
PIDA IDE has pretty good Mercurial integration. We also use Mercurial for development itself. Personally I have about 15 concurrent clones going of some projects, and the IDE copes fine. We don't have the trouble of tweaking build scripts etc.; we can "clone and go".
It is so easy that in many cases I will clone to the bug number like:
hg clone http://pida.co.uk/hg pida-345
That's for bug #345, and I am ready to fix it.
If you find yourself having to tweak build scripts depending on the actual checkout directory of your application, I would suggest that your build scripts use some kind of project-relative path rather than hard-coded paths.
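For example, a shell-based build step can resolve everything from the script's own location, so any clone builds without edits (a sketch; the src/build layout is an assumption):

BASEDIR="$(cd "$(dirname "$0")" && pwd)"          # the directory this script lives in, wherever the clone sits
mkdir -p "$BASEDIR/build"
javac -d "$BASEDIR/build" "$BASEDIR"/src/*.java   # compile relative to the clone, not to a hard-coded path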