I used the gem5 simulator to run my program; it generates stats.txt in the m5out directory, but I don't know how to calculate the cache access time from the stats in stats.txt.
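For context, stats.txt is just "name value # description" text, so it can be loaded and inspected with a short script. The sketch below is one rough way to derive an average L1D miss latency from the counters; the stat names used here are assumptions (spelling differs between gem5 versions, e.g. overall_miss_latency vs. overallMissLatency), so check your own stats.txt for the exact keys:

# Rough sketch: load stats.txt into a dict keyed by stat name.
stats = {}
with open("m5out/stats.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 2:
            try:
                stats[parts[0]] = float(parts[1])
            except ValueError:
                pass  # skip non-numeric values

# Assumed stat names -- verify against your stats.txt.
miss_latency = stats.get("system.cpu.dcache.overall_miss_latency::total")
misses = stats.get("system.cpu.dcache.overall_misses::total")
if miss_latency and misses:
    print("avg L1D miss latency (ticks):", miss_latency / misses)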
Related
I thought Colab Pro+ would allow me to run on a GPU for longer than 24 hours, but the VM is getting killed right at 24 hours, even though the (Chrome) browser has stayed open and a Python program has been running the whole time using the GPU. I tried running in background mode, and in that case the VM was killed in about an hour. What am I missing?
This is my first time working with gem5. According to gem5.org, the following build command should take about 15 minutes or so to complete: scons build/X86/gem5.opt -j9. But it's been more than an hour since the build started and it's still not complete. Has anyone experienced the same issue? Is this normal? My machine has 8 cores, and I've allocated 16 GB of memory to VMware, on which I'm running the build. Could it be a hardware problem, such as not enough memory?
So far I have started the build process from scratch a few times with the same results. I've also tried it on a different virtualization platform (VirtualBox), but it's taking the same amount of time to build.
Thanks!
I'm just getting back into trying some front-end projects for the first time in a few years. Many npm-based JavaScript projects I try out end up taking a long time to start up in development mode, even for Hello World-ish examples. In particular, I'm trying out Nuxt.js.
Dev server startup takes about 100 seconds, and nothing seems to get cached, so restarts (not hot reloads) take the same amount of time. My research into the project and known npm issues hasn't turned up a definitive root cause or a way to improve this yet.
I'm using Emacs 26.1 in terminal mode on a 2018 13" MacBook Pro with a Core i5, 8 GB of RAM, and an SSD.
When I run npm run dev to start the Nuxt dev server, I get repeated error in process filter: Args out of range: "\342", -1 errors related to some unusual characters Nuxt uses to make its output pretty. If I try the same thing in a vanilla macOS terminal, the server startup is 10x faster. Why do those errors occur, and why is it so much slower in an Emacs terminal?
It turns out the repeated error in process filter issue may be caused by a bug in term mode that was recently fixed but might still be present in my version of Emacs.
As a workaround, the following gets the Nuxt dev server running in ~10s instead of ~100s in an Emacs terminal on my Mac, by filtering out the repeated lines about the modules being built:
$ npm run dev | grep -v modules
Note that I tried using npm's options to adjust the log level, but none of them seem to filter this output. If anyone knows a more "official" way of filtering it, or even better, how to stop it from rebuilding the modules on every dev server startup, I'd be interested to know.
Edit: it might make sense to adjust the dev script in the package.json file to include the grep filter, so that you can still just type npm run dev and get the workaround; a sketch follows.
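For example, something like this (a sketch only; it assumes the project's existing dev script is simply "nuxt", so adjust it to whatever your package.json currently contains):

{
  "scripts": {
    "dev": "nuxt | grep -v modules"
  }
}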
An average npm install seems to take around 44 seconds on my machine for a new Angular project created with the Angular CLI.
I looked at the computer's resource usage, but I didn't see anything being used at 100% (CPU, RAM, disk, Ethernet).
Is the install time that 'slow' because of the response times of the requests made during the process, or the speed of the server that serves me the node modules, or is there a specific hardware component that is slowing down the process?
Basically, I want to know whether upgrading something on my computer could decrease the install time.
I'm having a problem with TensorFlow (CPU) on Ubuntu 14.04 (a VM droplet): running a script is fast the first time, but when I run the same (or another) script directly after the first run completes, things become very slow.
I'm talking minutes instead of seconds. Even simple test scripts (like those provided in the tutorial) take forever, with no visible CPU load.
For comparison, the first run of the test script from the tutorial gives:

real    0m0.790s
user    0m0.688s
sys     0m0.111s

The second run of the same script, directly after the first run completes, gives:

real    2m46.628s
user    0m0.783s
sys     0m0.104s
Eventually, things seem to clear up and performance is back (only for one run though).
I narrowed the problem down to this:
sess = tf.Session()
takes a very long time. Apparently resources used by a previous Session are not properly released [?]. My scripts use the context manager, like
with tf.Session() as sess:
    sess.run(...)
My latest hypothesis is that this has to do with system properties (virtual machine settings, hypervisor issues interacting with TF's context manager). Using the TensorFlow Docker container makes no difference. Rebooting didn't help either. The same scripts run fine on OS X.
To make sure it's obvious what happened and that this question is answered: this occurred because TensorFlow was reading from /dev/random instead of /dev/urandom. On some systems, /dev/random can exhaust its supply of randomness and block until more is available, which causes the slowdown. This has since been fixed on GitHub; the fix is included in release 0.6.0 and later.
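If you want to confirm that this is what you're hitting, a minimal check (a sketch, assuming a Linux guest and an affected pre-0.6.0 TensorFlow build) is to look at the kernel's entropy pool and time session creation directly:

# Sketch: on an affected system, a near-empty entropy pool plus a slow
# tf.Session() points at the /dev/random blocking issue described above.
import time
import tensorflow as tf

with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("available entropy:", f.read().strip())

start = time.time()
sess = tf.Session()   # this is the call that blocks while /dev/random refills
sess.close()
print("tf.Session() took %.1f s" % (time.time() - start))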