ms-vscode.cpptools taking a ton of CPU usage - vscode-extensions

I'm working on Ubuntu and using MS Remote SSH as part of Remote Explorer through VS Code. When I installed the C/C++ (ms-vscode.cpptools) extension, it started taking up a huge amount of CPU, typically around 95%. I thought perhaps this was a one-time thing, but it constantly runs at that level, making everything else (such as compiling the project) very slow.
I like the functionality of this extension, as I've used it on other machines before without issue. However, I can't use it if it stays at that level of usage. Is there any workaround for this? I've seen a few GitHub discussions, but nothing much has come out of them.
From the system monitor: (screenshot of the high CPU usage omitted)

After trying lots of solutions, only one worked for me: adding files.exclude to the settings file.
When you add files.exclude, the following entries are added by default:
"**/.git": true,
"**/.svn": true,
"**/.hg": true,
"**/.deps": true,
"**/CVS": true,
"**/.DS_Store": true,
Then I added some files/folders that are in the workspace but have nothing to do with IntelliSense, such as "**/*.Po" and the folder that stores test data (lots of files that are constantly updated while I use my application), etc.
I also added the following system directories:
"/bin": true,
"/boot": true,
"/cdrom": true,
"/dev": true,
"/proc": true,
"/etc":true
but I am not sure whether those lines actually help.
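Putting it all together, the relevant part of the settings file might look something like this (the "**/test-data" entry is just an illustration of a workspace-specific folder; use whatever actually holds your generated or test files):

"files.exclude": {
    "**/.git": true,
    "**/.svn": true,
    "**/.hg": true,
    "**/.deps": true,
    "**/CVS": true,
    "**/.DS_Store": true,
    "**/*.Po": true,
    "**/test-data": true,
    "/bin": true,
    "/boot": true,
    "/cdrom": true,
    "/dev": true,
    "/proc": true,
    "/etc": true
}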
After updating the settings file, run "C/C++: Restart IntelliSense for Active File" and "C/C++: Reset IntelliSense Database" from the Command Palette.
By the way, since applying the above solution I haven't encountered long-running high CPU usage for over two months.

Related

vs2017 vb.net Compile Options' errors when changing build configurations

I have a VB.Net WPF application with Option Explicit and Option Strict compile options set to OFF!
However, whenever I change build configuration from Release to Debug, or vice versa, my error list fills with hundreds of errors related to Option Explicit/Strict being 'ON'.
I have to go back into project properties > Compile and change the options (which still display as 'OFF', mind you) back to OFF again, i.e. I click the drop-down and re-select the OFF option.
Why? I have to do this many times a day! It is not exclusive to a single app; it happens with all the apps I write.
Do note: this does not happen every single time, but it does happen at least 75% of the time.
FYI, in case this is of any relevance: my solution is on a network drive/share, and I use Windows 7.
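One thing that might be worth checking (a guess on my part, not something from the original post) is how the options are recorded in the .vbproj itself: if OptionExplicit/OptionStrict have ended up only inside the configuration-specific PropertyGroups (the ones with a Condition on Debug or Release) rather than in the configuration-independent group, their effective value can differ between configurations. A sketch of the configuration-independent form:

  <!-- sketch: configuration-independent PropertyGroup in the .vbproj -->
  <PropertyGroup>
    <OptionExplicit>Off</OptionExplicit>
    <OptionStrict>Off</OptionStrict>
  </PropertyGroup>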

Make PhpStorm watch (specific) file and upload upon change

I'm looking for a way for PhpStorm to watch a file, so that every time it changes it syncs that file to the remote server. The 'Upload external changes' option (with 'On explicit save action') is not working for me. It's close, but no cigar: it makes me save the SCSS file twice, and only after the second save does it upload the gulp-compiled style.css, which is what I'd like it to do after the first save. I know it's an itty-bitty thing, but for something I easily do 200-400 times per day I'd like it to run as smoothly as possible. It's partly the time spent saving the file twice, but it's also the doubt in my mind every time something doesn't act the way I want; I always have to check first whether the file was uploaded properly.
I'm working with WordPress and have 5-8 SCSS files that are compiled using gulp. The gulp pipeline is quite comprehensive (autoprefixer, merge-media-queries, minify-css, etc.), so it takes a couple of milliseconds for the file to compile. I assume it's that compile time that makes PhpStorm 'miss' that style.css has changed, and therefore not upload it on the first save. Sometimes everything is uploaded after the first save, but only about every 8th time or so.
Extra fun fact (which may indicate where the body is buried): if I run the gulp watch from a terminal in the background, I have to save the file, wait at least 2-4 seconds and then save again before the gulp-compiled style.css is uploaded. If I press save repeatedly with only a 1-second pause instead of those 2-4 seconds, PhpStorm never syncs the gulp-compiled style.css.
If I run the gulp watch from PhpStorm's Gulp integration (not the terminal, but the Gulp prompt), then I can save twice in a row with only a 0.5-second pause in between (as soon as the progress bar at the bottom disappears), and then it uploads, consistently, every time.
It's only the upload of the gulp-compiled style.css that is the problem. Everything else works perfectly.
Here's what I have tried:
Attempt 1) I previously used Atom, where the FTP configuration was established using a plugin with a .ftpconfig file. In that file there was a 'watch: []' parameter where I could specify a file to watch. That worked wonders! I've tried to find a PhpStorm plugin that does something similar, but couldn't find one.
Attempt 2) I read one of the support pages about this. It seems that if the SCSS files are compiled by PhpStorm itself, then it knows to upload style.css after it has been compiled. But I can't reproduce my gulpfile with PhpStorm's SCSS compilation (since gulp compiles the file and then pipes the content through multiple steps), and if I had to set that up every time I change projects, it would be a pain to maintain.
Attempt 3) I thought about making a macro and then seeing if I could remap Cmd-S to 'save current file and execute the macro'. But then if I don't edit the SCSS file (just some PHP file), it would still upload style.css. It's by no means a pretty solution, but it shows how far I've gone to find a way to automate this.
Attempt 4) I thought about building the upload into the gulpfile, so that it uploads style.css after compiling it. I considered getting the host and username from the .idea folder, getting the password from the keychain, and establishing the SFTP connection that way. But it quickly became a lot of work, so I hoped there was an easier/better way. (A rough sketch of that idea follows below.)
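For completeness, here is a rough sketch of what Attempt 4 could have looked like; the gulp-sftp plugin, the task names, the paths and the way the credentials are supplied are all assumptions, not part of the actual setup described above:

// gulpfile.js (sketch): upload the compiled style.css once the existing 'styles' task has run
var gulp = require('gulp');
var sftp = require('gulp-sftp');

// gulp 3 style: 'upload-css' depends on the (assumed) 'styles' task that compiles the SCSS
gulp.task('upload-css', ['styles'], function () {
    return gulp.src('wp-content/themes/THEME_NAME/style.css')
        .pipe(sftp({
            host: 'example.com',            // placeholder host
            user: 'deploy',                 // placeholder user
            pass: process.env.SFTP_PASS,    // e.g. read the password from an environment variable
            remotePath: '/var/www/html/wp-content/themes/THEME_NAME/'
        }));
});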
Addition 1
LazyOne asked what I had tried with File Watchers, so here goes. I tried making PhpStorm compile the SCSS files (so PhpStorm basically did what gulp does for me today).
That was a File Watcher with the SCSS file type, the scope was the SCSS files in my project, 'Program' pointed to my Ruby-installed scss binary, and so on.
However, I realized that it was difficult to get PhpStorm to do the same thing to the SCSS files that the gulpfile does.
The gulpfile I'm using is supplied with each project (as a default), and several web designers work on the same projects. So if I suddenly do something other than use that gulpfile, I'm pretty sure I'd have to produce exactly the same result, otherwise my colleagues would skin me alive. I made it as far as having this as my Arguments before I gave up (I would still need to do several things to this line before it does what my gulpfile does):
--no-cache --update $FileName$:../../style.css --style compressed --sourcemap=none
However, when doing that I could conclude that style.css was uploaded on every save (score!). Which means that if I can just set up some kind of File Watcher, then hopefully PhpStorm will watch that file and upload it after gulp has compiled style.css.
So I tried to make a File Watcher that watches style.css, but I didn't know what to put as 'Program', since basically I just want PhpStorm to poke style.css three times with a stick, realize that the file has changed, and upload it to the server. Is that possible to do automatically?
Ok. Here's how it's done!
The important part is that PhpStorm watches the style.css file, and it will do that if you point a File Watcher at that file. At first I tried running the gulpfile with a File Watcher, then I tried compiling the SCSS files with the File Watcher, but neither is necessary. The important part is 'Output paths to refresh' (as LazyOne pointed out in the comments). So the best solution I've come up with is to make a bash script that does nothing. It looks like this:
#!/bin/bash
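# intentionally (almost) empty: the File Watcher just needs a program to run so that 'Output paths to refresh' is processed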
sleep .01
I've called it 'donothing'. The sleep .01 isn't required for it to work, but it doesn't stop it from working either (based on the 20 upload tests I've done), and in my mind it's better to have a short delay after the SCSS files are saved, so the gulp integration in PhpStorm has time to finish compiling style.css.
Here's the setup of my File Watcher:
The scope is a custom-made one, and I've set it to this:
file[PROJECTNAME]:wp-content/themes/THEME_NAME/assets/sass/*
But I assume the scope could just be the entire project, since the watcher only looks for SCSS file changes within that scope before executing the 'Program' (if I've understood this whole File Watcher thing properly).
And I'm working on a project, where the SCSS-files are at
/wp-content/themes/THEME_NAME/assets/sass/STYLEDIRNAME/
In that case, 'Output paths to refresh' should be set to something along these lines:
$FileDirRelativeToProjectRoot$/../../style.css
or
$ProjectFileDir$/wp-content/themes/THEME_NAME/style.css
And if you unfold 'Other options', you have to set the working directory to the root of the project for it to work:
/Users/USERNAME/dropbox/foo/bar/
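To summarise the File Watcher fields from this answer in one place (paths are the illustrative ones used above):

Scope:                    file[PROJECTNAME]:wp-content/themes/THEME_NAME/assets/sass/*
Program:                  the 'donothing' script shown above
Output paths to refresh:  $ProjectFileDir$/wp-content/themes/THEME_NAME/style.css
Working directory:        /Users/USERNAME/dropbox/foo/bar/ (the project root)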
A downside of this solution is if PhpStorm is faster than the gulp task, since it would then upload style.css before gulp has finished compiling it (and therefore upload the old or an incomplete file). That didn't happen during my 20 brief tests, but if it does happen, I would set a longer delay in the 'donothing' file and see if that solves it.
Addition 1
I just experienced that it wasn't working, i.e. it didn't upload style.css on every compilation/save (though I could swear the setup was correct). So I restarted PhpStorm and then it worked. I had tried restarting the gulp watch process first, but that didn't make a difference.
Another, much simpler solution is to set the File Watcher's scope for SCSS files to "All Changed Files".
This is what my File Watcher setup looks like (screenshot omitted).
It compiles the .scss file, copies the minified .css into another folder, and then automatically uploads both files to the server if you have "Auto upload" enabled.
(You might note that I'm not using the SCSS compiler but the PHP-based PSCSS, which is about 70% faster and can be installed via composer global require scssphp/scssphp).

XDebug really slow

I am trying to get Xdebug working on my local WAMP installation (Uniform Server 8).
However when I put
xdebug.remote_enable=1
in my php.ini, which is required for my IDE to use Xdebug, loading pages gets really slow, as in 5-seconds-per-page slow. The debugger works, though.
I haven't used Xdebug before, but I imagine it normally shouldn't take this long. It might well have something to do with the fact that I'm using the Symfony2 framework.
Does anyone have an idea what's causing this?
Maybe it's because this is simply what Xdebug does!
Check the default storage location for Xdebug output (most of the time /tmp/xdebug/something), which on Windows will be somewhere different than on Unix/Linux systems.
Set these in your php.ini if you want the files placed or named somewhere else:
xdebug.profiler_output_dir
Type: string, Default value: /tmp
The directory where the profiler output will be written; make sure that the user PHP runs as has write permissions to that directory. This setting cannot be set in your script with ini_set().
xdebug.profiler_output_name
Type: string, Default value: cachegrind.out.%p
This setting determines the name of the file that is used to dump traces into. The setting specifies the format with format specifiers, very similar to sprintf() and strftime(). There are several format specifiers that can be used to format the file name.
Generating these files is taxing on your system, but they are what you need in order to profile your code.
Also, go read http://xdebug.org/docs before you use it again, so that you know exactly what you are trying to do.
As per another answer on SO, you need to set xdebug.remote_autostart = 0 in your php.ini
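Putting the directives from these answers together, a minimal php.ini sketch might look like this (Xdebug 2.x directive names; the extension path is just an example, and whether you enable the profiler at all is up to you):

; Xdebug 2.x settings (sketch)
zend_extension = "C:\path\to\php_xdebug.dll"       ; example path; point this at your actual Xdebug DLL
xdebug.remote_enable = 1                           ; required so the IDE can attach the debugger
xdebug.remote_autostart = 0                        ; only start a debug session when the IDE requests one
xdebug.profiler_enable = 0                         ; profiling every request is what makes pages crawl
xdebug.profiler_output_dir = "C:/tmp/xdebug"       ; where cachegrind files go when profiling is enabled
xdebug.profiler_output_name = "cachegrind.out.%p"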

How to implement system-wide text replacement in Windows programmatically?

I have a small VB.NET application that, among other things, attempts to substitute text typed by the user system-wide (the hotstrings concept). To achieve that, I have deployed 'ahk2exe' and 'AutoHotkeySC.bin' with my application and do the following:
When a user assigns a new hotstring:
Kill the 'hotstring' script exe if it is running
Append the new hotstring to the script file (if none exists, create a new one)
Convert the edited/new script file to an exe (using ahk2exe)
Run the newly converted script exe
(Somewhere in there I also check whether the hotstring has already been assigned.)
However, I am not totally satisfied with this method for the following two main reasons:
The extra resources deployed with the application.
Lag: killing the process and restarting it takes a minimum of 5 seconds on my fast computer, and more on other computers. That is much longer than the time it takes the user to assign the hotstring, minimize/close the window, and then test the new hotstring. When the user initially has no success, they will think the process failed, so this method is not good for the user experience.
So I am looking for a different method or implementation. Maybe using keyboard hooks? Or maybe adding a .dll library that achieves the same thing. Are there any resources you know of that might help (free or commercial)? What is the best way to achieve my goal?
Many thanks for your help.
Implementing what AutoHotkey does yourself would be a pretty non-trivial task.
But I'm pretty sure that AHK supports an "auto-reload" option for scripts: googling "autohotkey auto reload" turned up several pages discussing that very concept. If that works, all you'd have to do is update the script file and that's it; AHK should automatically reload the script.

Run application from documents instead of program files

I'm working on a self-updating application, and one issue I'm running into on Vista and Windows 7 is needing admin privileges in order to update the client. I've run into issues with customers whose users run under restricted permissions: IT would have to log onto every machine that needed the update, since the users were not able to do it themselves.
A possible workaround I'm considering is to install the launcher application into Program Files as normal, and install the real application that it updates somewhere in the user's documents, so that users can update and run new versions without IT getting involved.
I'm wondering what potential gotchas I'm missing here, or what I should be aware of before heading down this path. I'm aware that ClickOnce does something very similar, and I'd use it, except that I need the ability to do silent updates without any user interaction.
This is how it is supposed to be. The last thing most IT departments want is a user randomly updating a piece of software. This could have all sorts of unintentional side effects such as incompatibility with the older version's files, new and possibly insecure functionality, etc. This is why IT departments disable Windows Update and do their updates manually in a controlled fashion.
If the users want an updated version of the software they should be requesting it from their IT department. Those computers and infrastructure don't belong to them, they're simply borrowing time on them from the company they work for so they can do their job.
Is there an issue with having only one installation of your program? Is it particularly large, for example?
Do you require admin privileges to run your program?
If not, odds are you don't need the Program Files folder.
I suggest you forgo installing to Program Files entirely and just install your program into the user's folder system at <userfolder>\AppData\ProgramName.
If you happen to be using .NET, look into the ClickOnce deployment mechanism. It's got a great self-updating feature that'd probably make your life a lot easier.
Edit: Just saw your last sentence. ClickOnce can force the user to update.
A couple of things:
If you decide to move your app somewhere under the user's documents, make sure your application writes its data correctly relative to wherever it is installed, e.g. check whether there are hard-coded paths anywhere in the code pointing to the wrong places. Perhaps this isn't an issue for you, but it's something to keep in mind.
We solved this in pretty much the same way when we decided to implement a "live update" feature, except that we installed a service that runs with administrator rights. This service in turn can run installers whenever the program needs to be updated. With this type of solution you don't even have to move your application out of Program Files.
Cheers!
Edit:
Another neat thing about having a service running as administrator is that you can set up named-pipe communication with it and have it do things for you that you wouldn't be able to do as a normal user.
A loader stub is a good way to go. The only gotcha is when you have to update the loader; the same initial problem applies (though that should be pretty infrequent).
One problem that comes to mind off the top of my head is that you're stepping outside the whole idea of keeping things more "secure". Since your executable lives in a location that is fully accessible to a non-administrator, it's possible that something else could overwrite your exe, thus subverting security.
You can probably leverage AppLocker. It may only be available on Windows 7, though; I'm not running Vista any more. ;)