I'm running the command:
gclient sync --no-history -j1
However, nothing happens. Normally the Chromium repositories would start syncing, but there is no output at all. I do have a .gclient file. Any suggestions?
This command doesn't print any progress information for the first ~20-30 minutes. Just wait a while and you will see the sync progress. In my environment it takes ~30 minutes before anything shows up in the terminal.
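If you want some reassurance that it is actually doing something, gclient also accepts a verbose flag (a sketch, assuming a recent depot_tools checkout; the flag can be repeated for more detail):
# -v prints per-dependency activity instead of staying silent for the whole sync
gclient sync --no-history -j1 -v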
I ran tests many times in headless mode using Robot Framework and Selenium on my Ubuntu server. Even though the tests have finished, my RAM usage has increased a lot. I got suspicious, installed htop and looked. Are these resource usages normal, or are the processes still running in the background? What should I do to get my RAM usage back to normal? I have shared the image link below.
My guess: Robot Framework produces report.html and log files, so as far as I understand the robot run ends, but the Chromium processes keep running.
IMAGE LINK
Did you put a driver.quit() in your code? If not, the driver stays alive. You need to kill the task manually, or write a script to do it. I wrote a batch file for Windows to kill all chrome.exe and chromedriver.exe processes when I'm testing my bot; you might be able to do something similar in bash:
@echo off
TASKKILL -F -IM chrome.exe
TASKKILL -F -IM chromedriver.exe
echo "####################################"
echo "# DRIVER KILLED SUCCESSFULLY #"
echo "####################################"
In this case, @echo off disables echoing of the commands to the console, and the echo lines are not strictly necessary.
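A rough bash equivalent for Linux could look like the sketch below; the process names are assumptions, so adjust them to whatever binaries your setup actually launches (chrome, chromium, chromedriver, etc.):
#!/bin/bash
# force-kill any leftover driver and browser processes
pkill -f chromedriver
pkill -f chromium
echo "driver and browser processes killed"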
I have a unique problem using the JMeter SSH Command sampler.
I use this step to run Spark jobs.
The problem is that one of the commands does not work: it connects, gets no response, and just waits for hours with nothing displayed on screen.
I know how to work with the tool, and this behavior is specific to this one script.
All the other scripts work; I duplicated one that works, for example:
sudo /run_stg.sh - this command works
sudo /run_off2-stg.sh - this command does not work
If I run the job manually via Jenkins, it works.
If I go to the command line and use plik ssh, it works.
The problem is just JMeter, which waits and waits, and I cannot understand for what.
The job takes about 3 minutes, and I have waited for a response in JMeter for 4 hours; nothing, JMeter just waits.
I set the console log to trace level and got nothing; I have absolutely no idea how to start handling this issue in JMeter.
Can anyone please advise how to make JMeter log what is happening?
Or at least how to tell whether it connected at all?
Because of this behavior, none of the tests can be performed.
Most probably you are, as usual, misconfiguring the SSH Command sampler.
The idea is not to run the script per se; you need to delegate the script execution to a Unix shell, for example Bash. This way you will be able to combine several commands together, see the output, amend the debugging level, etc.
So I would recommend setting your command to something like /bin/bash -c -x /your/script.sh
Another guess: given you use sudo, it might be that the sudo command simply waits for a password (which JMeter never provides). If this is the case, try amending your script permissions using the chmod command and allowing your user to execute it without root privileges, for example as sketched below.
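A minimal sketch of both options (the username jmeteruser is hypothetical):
# option 1: make the script executable by your SSH user so sudo is not needed
chmod 755 /run_off2-stg.sh
# option 2: keep sudo but allow it without a password for this script only
# (add this line via visudo)
# jmeteruser ALL=(ALL) NOPASSWD: /run_off2-stg.sh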
And finally, given you're able to run your command using "plik ssh" (whatever that is), you can run it using the OS Process Sampler instead.
More information: How to Run External Commands and Programs Locally and Remotely from JMeter
I am building the WebRTC library using Travis CI.
This runs well but takes a lot of time, and more and more often the build ends with the message:
The job exceeded the maximum time limit for jobs, and has been
terminated.
You can consult a log that failed here: travis log
During the gclient sync:
_______ running 'download_from_google_storage --directory --recursive --num_threads=10 --no_auth --quiet --bucket chromium-webrtc-resources src/resources' in '/home/travis/build/mpromonet/webrtc-streamer/webrtc'
...
Hook 'download_from_google_storage --directory --recursive --num_threads=10 --no_auth --quiet --bucket chromium-webrtc-resources src/resources' took 1255.11 secs
I disabled the tests, so I think this step is useless, and it takes a lot of time.
Is there any way to pass some arguments or set some variables to avoid this time-costly task?
A way to not download the chromium-webrtc-resources defined in this DEPS hook,
{
# Download test resources, i.e. video and audio files from Google Storage.
'pattern': '.',
'action': ['download_from_google_storage',
'--directory',
'--recursive',
'--num_threads=10',
'--no_auth',
'--quiet',
'--bucket', 'chromium-webrtc-resources',
'src/resources'],
},
is to patch it, either removing this section or adding a condition that evaluates to false.
In order to patch it, I used the following command:
sed -i -e "s|'src/resources'],|'src/resources'],'condition':'rtc_include_tests==true',|" src/DEPS
This saves about 20 minutes and allows the Travis build to stay below the timeout.
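For reference, after that sed command the end of the patched hook entry should look roughly like this, so gclient only runs the hook when rtc_include_tests is true:
'src/resources'],'condition':'rtc_include_tests==true',
},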
You can bake the entire toolchain into a Docker image and run your actual tests/builds in that. Delegate the Docker image update to another automated process (a travis-ci cron job, for example).
An additional benefit is that you now have full control over when parts of your toolchain change. I find that very important.
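A minimal sketch of the idea, assuming Docker is available in the build; the image name and build script are hypothetical:
# run periodically (e.g. from a Travis cron job): rebuild and publish the toolchain image
docker build -t myorg/webrtc-toolchain:latest .
docker push myorg/webrtc-toolchain:latest
# in the regular build: reuse the prebuilt toolchain instead of syncing it every time
docker pull myorg/webrtc-toolchain:latest
docker run --rm -v "$PWD":/src -w /src myorg/webrtc-toolchain:latest ./build.sh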
Edit:
Some resources to read.
The official travis docs for using docker
Building & deploying images on travis
Dockerhub automated builds
We're running tests using Karma and PhantomJS. Last week, our tests mysteriously started crashing PhantomJS with an error of -1073741819.
Based on this thread for Chutzpah, it appears that code indicates a native memory failure with PhantomJS.
Upon further investigation, we are consistently seeing PhantomJS crash at around 750 MB of memory.
Is there a way to configure Karma so that it does not run up against this limit? Or a way to tell it to flush PhantomJS?
We only have around 1200 tests so far. We're about 1/4 of the way through our project, so 5000 UI tests doesn't seem out of the question.
Thanks to the StackOverflow phenomenon of posting a question and quickly discovering an answer, we solved this by adding gulp tasks. Before, we were just running karma start at the command line, which spun up a single instance of PhantomJS that crashed once 750 MB was reached.
Now we have a gulp command for each of our sections of tests, e.g. gulp common-tests, gulp admin-tests and gulp customer-tests,
and then a single gulp karma command that runs each of those groupings. This allows each gulp command to have its own instance of PhantomJS, and therefore stay underneath that threshold.
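A sketch of what that gulpfile could look like, assuming gulp 4 and the karma Server API (the per-group config file names are illustrative):
var gulp = require('gulp');
var Server = require('karma').Server;

// one task per test grouping, each with its own PhantomJS instance
function runKarma(configFile) {
  return function (done) {
    new Server({ configFile: __dirname + '/' + configFile, singleRun: true }, done).start();
  };
}

gulp.task('common-tests', runKarma('karma.common.conf.js'));
gulp.task('admin-tests', runKarma('karma.admin.conf.js'));
gulp.task('customer-tests', runKarma('karma.customer.conf.js'));

// run the groupings one after another so only one PhantomJS is alive at a time
gulp.task('karma', gulp.series('common-tests', 'admin-tests', 'customer-tests'));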
We ran into a similar issue. Your approach is interesting and certainly sidesteps the problem; however, be prepared to face it again later.
I've done some investigation and found the cause of the memory growth (at least in our case). It turns out that when you use:
beforeEach(inject(function (SomeActualService) { .... }));
the memory taken up by SomeActualService does not get released at the end of the describe block, and if you have multiple test files where you inject the same service (or other injectable objects), more memory is allocated for it again.
I have a couple of ideas on how to avoid this:
1. Create mock objects and never use inject to get real objects unless you are in the test that tests that module. This will require writing tons of extra code (see the sketch after this list).
2. Create your own tracker (for tests only) for injectable objects. That way they can be loaded only once and reused between test files.
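A minimal sketch of idea 1, assuming angular-mocks and Jasmine spies (the module and method names are placeholders):
// stub the dependency instead of injecting the real implementation
beforeEach(module('myApp', function ($provide) {
  $provide.value('SomeActualService',
    jasmine.createSpyObj('SomeActualService', ['doSomething']));
}));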
Forgot to mention: we are using Angular 1.3.2 and Jasmine 2.0, and hit this problem at around 1000 tests.
I was also running into this issue after about 1037 tests on Windows 10 with PhantomJS 1.9.18.
It would appear as ERROR [launcher]: PhantomJS crashed. after the RAM for the process exceeded about 800-850 MB.
There appears to be a temporary fix here:
https://github.com/gskachkov/karma-phantomjs2-launcher
https://www.npmjs.com/package/karma-phantomjs2-launcher
You install it via npm install karma-phantomjs2-launcher --save-dev
but then you need to use it in karma.conf.js via:
config.set({
browsers: ['PhantomJS2'],
...
});
This seems to run the same set of tests while only using between 250-550 MB RAM and without crashing.
Note that this fix works out of the box on Windows and OS X, but not Linux (PhantomJS2 binaries won't start). This affects pushes to Travis CI.
To work around this issue on Debian/Ubuntu:
sudo apt-get install libicu52 libjpeg8 libfontconfig libwebp5
This is a problem with PhantomJS. According to another source, PhantomJS only runs the garbage collector when a page is closed, and that only happens after your tests finish running. Other browsers work fine because their garbage collectors work as expected.
After spending a few days on the issue, we concluded that the best solution was to split tests into groups. We had grunt create a profile for each directory dynamically and created a command that runs all those profiles. For all intents and purposes, it works just the same.
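A static sketch of that setup using the grunt-karma plugin (the target names and per-directory config files are illustrative; the dynamic profile generation is omitted):
// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    karma: {
      common:   { configFile: 'karma.common.conf.js', singleRun: true },
      admin:    { configFile: 'karma.admin.conf.js', singleRun: true },
      customer: { configFile: 'karma.customer.conf.js', singleRun: true }
    }
  });
  grunt.loadNpmTasks('grunt-karma');
  // each target gets its own PhantomJS process, so none of them hits the memory limit
  grunt.registerTask('test-all', ['karma:common', 'karma:admin', 'karma:customer']);
};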
We had a similar issue on Linux (Ubuntu) that turned out to be caused by the maximum number of memory map areas a process may have:
$ cat /proc/sys/vm/max_map_count
65530
Then run this:
$ sudo bash -c 'echo 6553000 > /proc/sys/vm/max_map_count'
Note that the number was multiplied by 100.
This changes the setting for the current session only. If it solves the problem, you can make it permanent for all future sessions:
$ sudo bash -c 'echo vm.max_map_count = 6553000 > /etc/sysctl.d/60-max_map_count.conf'
Responding to an old question, but hopefully this helps ...
I have a build process which a CI job runs on a command-line-only Linux box, so it seems that PhantomJS is my only option there. I have experienced this memory issue locally on my Mac, but somehow it doesn't happen on the Linux box. My solution was to add another test command to my package.json that runs Karma using Chrome, and to run that locally for my tests. When pushed up, Jenkins kicks off the regular test command, running PhantomJS.
Install this plugin: https://github.com/karma-runner/karma-chrome-launcher
Add this to package.json
"test": "karma start",
"test:chrome": "karma start --browsers Chrome"
TortoiseSVN hangs (freezes) on "Sending content" when I use a post-commit hook on my VisualSVN repository. The following is the hook:
cd C:\Sysinternals\
PsExec \\OtherComputer TortoiseProc /command:update /path:"C:\MyPath\" /closeonend:4
The content is sent, but a local update is required or it is marked as out of date. Any ideas?
The hook script has to finish before the commit can succeed, so the client has to wait for it. If your hook script takes too long, or doesn't finish at all, the commit appears to hang.
You can try to start the long-running command in your hook script in a separate process so that the hook script itself finishes immediately.
However: if OtherComputer is the computer you're trying to commit from and the script tries to update the very same working copy, then that won't help either: the update has to wait until the commit is finished, but the commit waits for the hook script running the update to finish - you've got a deadlock.
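If OtherComputer really is a different machine, a rough, untested sketch of that fire-and-forget idea for the post-commit hook (same paths as in the question):
rem launch the remote update asynchronously so the hook returns immediately
start "" cmd /c C:\Sysinternals\PsExec \\OtherComputer TortoiseProc /command:update /path:"C:\MyPath" /closeonend:4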
This looks like a local hook. I don't think you can use PsExec like that: I think you're opening the PsExec session on the other computer, and it just sits there. It doesn't have a way to see the next line in the script, i.e. the TortoiseProc command isn't fed into the PsExec session.
I think you need to install the SVN command-line client on the other machine, then make a bat file (updateme.bat), place it on that machine, and then you can do something like this (all on one line):
c:\sysinternals\PsExec \\OtherComputer c:\updateme.bat
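The contents of updateme.bat could be as simple as the sketch below, assuming the command-line svn client is on the PATH and using the working-copy path from the question:
@echo off
rem update the local working copy on OtherComputer
svn update "C:\MyPath"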