Valgrind has long pause before running executables - valgrind

Let me preface this question by saying that I know it takes programs longer to run in valgrind as there is a lot of overhead. This question is not about that.
To ensure that our implementations of data structures have the appropriate runtime, all test cases time out after a certain amount of time (usually around 10 times as long as the teacher-produced solutions take to run in Valgrind). I ran the test cases on my laptop early in the day and everything was fine. Later that night I made two very minor changes (adding one to something and adding a counter for something else, both constant-time operations). When I reran the tests, I timed out on even the most basic test cases, like inserting one node.

I was freaking out, so I went to the 24/7 computer lab on campus, ran my code on a virtual machine, and it worked fine. The binaries themselves are speedy on my laptop. I tried turning my computer off and back on, which didn't fix anything, so I tried updating Valgrind, but it is up to date. I removed Valgrind and re-installed it, and that didn't fix the problem either.

To verify that it is a problem with Valgrind and not my code, I made a hello_world.cpp and ran the binary in Valgrind with no extra flags. It takes about 15-20 seconds to run. I have absolutely no idea why this is happening; I've made no changes to my computer. I've skimmed the Valgrind documentation, but I cannot pin down what is wrong. I run Fedora 27.

Related

Does running code multiple times use a lot of memory?

I have just gotten into programming, and I realized that every variable you declare uses your computer's memory, since it is saved in it somewhere.
I wanted to know: if I run a piece of code multiple times, do I lose more memory, or does the system delete it automatically once the terminal or program is closed?
Thank you!
I've run the code several times, and each time the address that the same variable is saved at is different.
I believe I'm wasting my computer's memory, so if I am, how do I delete said variables from memory?
Yes, for all intents and purposes, it is gone the second the program has finished executing.
There are times when this isn't true, but they almost certainly don't apply to you. On any normal computer or device running an OS, the OS (operating system) will clean up any resources used by your code when it finishes running. This includes all the memory used by declared variables (which is a tiny amount anyway, normally), files you opened and forgot to close, and pretty much everything else. OSes are very resilient!
I've run the code several times, and each time the address that the same variable is saved at is different. I believe I'm wasting my computer's memory, so if I am, how do I delete said variables from memory?
These are some pretty good investigative skills (a good sign for someone new to programming), but there are different reasons for this, so don't worry. Memory addresses are a complex topic worth a look later down the line, but the simplified story is that addresses are different every time you run the program, for both security and performance reasons.

Faster start up of Jython possible?

At the moment I'm programming with Jython on my Laptop, but want to use it on my Raspberry Pi3 (Raspbian) later.
Well, the start-up time of my program on my laptop is under 2 s, but on my Pi3 it's up to 30 s.
I know the issues are that Jython needs time to start up the JVM and that even a Pi3 is not as fast as my three-year-old laptop, but is it still possible to reduce this start-up time (without over-clocking my Pi)?
EDIT:
At the end I want to use my .py scripts with the jython-standalone.jar v2.7.0
Well, I found a solution that is pretty OK for my project: for small projects OpenJDK seems to be fine, but some things WON'T work with Jython. I had a sleepless weekend until I got the idea to check which version of Java I was using. With Oracle Java (the newest version) it seems to be as fast as on my Windows laptop (yes, it is a little bit slower, but a second more or less doesn't matter in my case).
If there is still a faster way to startup, surprise me. :)

Intellij IDEA 2016.2 high CPU usage

I have only one project (an ordinary Spring Framework project) open, and the IDE is using CPU like crazy:
JVisualVM CPU sample:
Note that this started happening just recently.
Any ideas?
The correct answer was posted by @matt-helliwell, if you're coming from a version older than 2016.2:
File -> Invalidate Caches and Restart
If the above doesn't fix your problem, track this issue:
https://youtrack.jetbrains.com/issue/IDEA-157837
I invalidated the caches and it solved the problem for some time. But after a couple of days, IDEA (my version is 2017.1.3) started to work slowly, with some freezing delays, again. Finally I increased the maximum available memory to 2 GB (the -Xmx parameter in the idea.exe.vmoptions/idea64.exe.vmoptions file) and now it works perfectly.
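The change amounts to one line in the vmoptions file. A sketch of what that file might look like (the values other than -Xmx are illustrative defaults, and the file's location varies by IDE version):

```
# idea64.exe.vmoptions (or idea.exe.vmoptions for the 32-bit launcher)
-Xms512m
-Xmx2g
-XX:ReservedCodeCacheSize=240m
```

After editing the file, restart the IDE for the new heap limit to take effect.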
I solved the problem by running the 64-bit launcher:
JetBrains\IntelliJ IDEA 2016.2.4\bin\idea64.exe
Another possible solution: my IDEA was very slow because of a huge SQL file that was open, which was consuming all my CPU.
It took me a long time to notice this was happening only when opening a specific utility class with more than 1000 lines of code.
This class had maybe 50 public static methods (the reason why it is a utility class...), all pure.
At first, I thought it was stuck in a loop of "Performing code analysis", because that was the task running heavily in the background, as shown when hovering the mouse over the green check at the top of the offending class's window. But in reality, it was slowly scanning every place in the entire source code where that class's methods were used.
It took about 45 minutes to scan the entire class, with CPU usage at max (100%) the whole time.
Once the class is closed, the usage stops.
The issue (at least with AS Dolphin 2022-23) is that the analysis result is never saved, so if the window is closed and opened later, the analysis starts from zero again. It never gets cached...

SMSS.exe set priority or affinity - insane CPU usage

I am having a problem on my Windows 8 64-bit (legitimate) computer. I have all the drivers for my motherboard, and in the last few weeks I have realised that smss.exe is using up to 40% (average of 30%) of my CPU. When it starts doing this, it can cause crazy lag in my games, even though I have a very high-spec PC.
The file is located in system32, and I've run lots of AV scans (from Microsoft Defender and MalwareBytes). In addition, I've scanned for disk errors on all drives and replaced the smss.exe with one from a working PC, but the problem still occurs.
A system restore is not an option here.
If there is no solution, is there any possible way to force the process's priority to low so my games are playable? At present, the process cannot be terminated or edited at all - not even its affinity.
Couldn't find the solution. After a lot of work, research, and repairing Windows files, I was lost. I even repaired a lot manually, but my Windows install was two years old. The only fix was to back everything up, reset the PC, and reinstall the same programs one by one, and no error has occurred since. Odd.

Speeding up the Dojo Build

We are running a build of our application using Dojo 1.9, and the build itself is taking an inordinate amount of time to complete - somewhere in the range of 10-15 minutes.
Our application is not huge by any means. Maybe 150K LOC. Nothing fancy. Furthermore, when running this build locally using Node, it takes less than a minute.
However, we run the build on a RHEL server with plenty of space and memory, using Rhino. In addition, the tasks are invoked through Ant.
We also use Shrinksafe as the compression mechanism, which could also be the problem. It seems like Shrinksafe is compressing the entire Dojo library (which is enormous) each time the build runs, which seems silly.
Is there anything we can do to speed this up? Or anything we're doing wrong?
Yes, that is inordinate. I have never seen a build take so long, even on an Atom CPU.
In addition to the prior suggestion to use Node.js rather than Rhino (by far the biggest killer of build performance): if all of your code has been correctly bundled into layers, you can set optimize to an empty string (don't optimize individual modules) and layerOptimize to "closure" (Closure Compiler) in your build profile, so only the layers are run through the optimizer.
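In build-profile terms, that suggestion looks roughly like the sketch below. The package and layer names are illustrative placeholders, not from the original question:

```
// myapp.profile.js - build profile sketch (package/layer names are examples)
var profile = {
    basePath: "..",
    releaseDir: "release",

    // Skip per-module optimization entirely...
    optimize: "",
    // ...and run only the built layers through Closure Compiler,
    // so the full Dojo source is not re-compressed on every build.
    layerOptimize: "closure",

    packages: [
        { name: "dojo", location: "dojo" },
        { name: "app",  location: "app" }
    ],

    layers: {
        "app/main": {
            include: [ "app/main" ],
            boot: true
        }
    }
};
```

With this setup, the expensive compression step touches only the layer files shipped to users instead of every module in the Dojo tree.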
Other than that, you should make sure that there isn’t something wrong with the system you are running the build on. (Build files are on NAS with a slow link? Busted CPU fan forcing CPUs to underclock? Ancient CPU with only a single core? Insufficient/bad RAM? Someone else decided to install a TF2 server on it and didn’t tell you?)