TortoiseSVN hangs (freezes) on "Sending content" when I use a post-commit hook on my VisualSVN repository. The following is the hook:
cd C:\Sysinternals\
PsExec \\OtherComputer TortoiseProc /command:update /path:"C:\MyPath\" /closeonend:4
The content is sent, but a local update is required or it is marked as out of date. Any ideas?
The hook script has to finish first to make the commit succeed. So the client has to wait for that. If your hook script takes too long or doesn't finish at all, then the commit appears to hang.
You can try to start the long-running command in your hook script in a separate process so that the hook script itself finishes immediately.
However: if OtherComputer is the computer you're trying to commit from and the script tries to update the very same working copy, then that won't help either: the update has to wait until the commit is finished, but the commit waits for the hook script running the update to finish - you've got a deadlock.
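For what it's worth, a minimal sketch of that separate-process approach on Windows, using start to detach the call (the paths and command are the ones from the question; the deadlock caveat above still applies):
rem post-commit.bat - hand the update off to a detached process so the hook returns immediately
start "" /b C:\Sysinternals\PsExec.exe \\OtherComputer TortoiseProc /command:update /path:"C:\MyPath\" /closeonend:4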
This looks like a local hook. I don't think you can use PsExec like that. I think you're opening a PsExec session on the other computer, and it just sits there; it has no way to see the next line in the script, i.e. the TortoiseProc command isn't fed into PsExec.
I think you need to install the SVN client (command-line client) on the other machine. Then make a bat file (updateme.bat), place it on that machine, then you can do something like this (all one line):
c:\sysinternals\PsExec \\OtherComputer c:\updateme.bat
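A minimal sketch of what updateme.bat might contain, assuming the working copy path from the question and that the command-line svn.exe is on the PATH of the other machine:
rem updateme.bat - update the working copy without any UI or prompts
svn update "C:\MyPath" --non-interactive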
I have an unusual problem using the JMeter SSH Command sampler.
I use this step to run Spark jobs.
The problem is that one of the commands doesn't work: it connects but never gets a response, just waits for hours, and nothing is displayed on screen.
I know how to work with the tool, and this behavior happens only with this one script.
All other scripts work. For example, I duplicated one that works:
sudo /run_stg.sh - this command works
sudo /run_off2-stg.sh - this command does not work
If I run the job manually via Jenkins, it works.
If I go to the command line and use plik ssh, it works.
The problem is only in JMeter, which just waits and waits, and I can't understand what for.
The job takes about 3 minutes, but I have waited for a response in JMeter for 4 hours and nothing happens; JMeter just keeps waiting.
I set the console log to trace level and got nothing; I have absolutely no idea how to start handling this issue in JMeter.
Can anyone please advise how to make JMeter log what is happening?
Or at least how to tell whether it connected at all?
Because of this behavior, none of the tests can be performed.
Most probably you are misconfiguring the SSH Command sampler.
The idea is not to run the script per se; you need to delegate the script execution to the Unix shell, for example Bash. This way you will be able to combine several commands, see the output, adjust the debugging level, etc.
So I would recommend setting your command to something like /bin/bash -c -x /your/script.sh
Another guess: given that you use sudo, it might be the case that the sudo command is simply waiting for a password (which JMeter never provides). If so, try amending your script permissions using the chmod command so your user can execute it without root privileges.
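If the password prompt is the culprit, a minimal sketch of that workaround, assuming the script path from the question:
chmod a+x /run_off2-stg.sh    # make the script executable for your user
/run_off2-stg.sh              # then call it without sudo in the SSH Command sampler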
And finally, given that you're able to run your command using "plik ssh" (whatever it is), you can run it using the OS Process Sampler.
More information: How to Run External Commands and Programs Locally and Remotely from JMeter
I have a self-hosted ASP.NET Core app deployed and in use in an enterprise environment on Windows Server 2012.
I am looking for a way to automate the update process. I am currently doing this through a bat file, but I keep getting Windows file-lock errors where files cannot be deleted. The process I follow in the bat file is as follows:
1. kill the dotnet core process for the web app
2. clear the directory (after sleeping for a couple of seconds)
3. copy the updates over
4. restart the web app
I am getting the errors in step 2, where I try to clear out the existing directory, which still has file locks even though I have killed the process: "Cannot delete output file - access is denied".
My question is how can I upgrade the self contained asp.net core web app in place and avoid the file locks? If the site is offline for a few seconds it is not an issue.
Thanks
There are several reasons I can think of why deleting the directory gives access-denied errors.
Your process isn't actually stopped yet. You can use PowerShell to wait until the process has stopped (or check whether it has stopped and otherwise wait a few more seconds).
Another process is still running in this folder (maybe even a command prompt, or explorer.exe has the folder open).
You need admin rights to delete this folder.
The bat file you are executing runs from this directory and is itself locking it.
Try one of the following:
PowerShell Stop-Service: this should wait until the service has really stopped.
PowerShell Wait-Process: waits until the process has stopped; you can call it directly after Stop-Process.
Try running PowerShell to wait, for example like this (from the command line):
powershell -Command "Wait-Process -Name MyProcess"
(warning: you might run into ExecutionPolicy problems)
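Putting the stop/wait/clear/copy/start sequence together, a rough sketch of such an update script (the process name, paths and executable name below are placeholders, not taken from your setup):
rem update.bat - stop the app, wait for the process to exit, then replace and restart it
powershell -Command "Stop-Process -Name MyWebApp -Force; Wait-Process -Name MyWebApp -ErrorAction SilentlyContinue"
rem give Windows a moment to release any remaining file handles
timeout /t 3 /nobreak
rd /s /q C:\Apps\MyWebApp
xcopy /e /i /y C:\Staging\MyWebApp C:\Apps\MyWebApp
start "" C:\Apps\MyWebApp\MyWebApp.exe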
Tip
Use msdeploy; you can remotely execute commands and deploy your application.
You can use pre- and post-sync scripts (to stop and start the app), and msdeploy itself will sync the folder/directory for you.
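For illustration only, a sketch of what such an msdeploy call could look like (all paths, the server name and the script names are placeholders; check the msdeploy documentation for the exact provider syntax):
msdeploy.exe -verb:sync ^
  -source:contentPath="C:\Staging\MyWebApp" ^
  -dest:contentPath="C:\Apps\MyWebApp",computerName="MyServer" ^
  -preSync:runCommand="C:\Apps\stop-app.bat" ^
  -postSync:runCommand="C:\Apps\start-app.bat"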
Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
I see the following behavior: when I run the Jenkins job and call the restart script (which takes the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive. But when the same script runs from Jenkins, even though I don't see any errors and everything appears to work, the new process dies as soon as that SSH step completes and control returns to the next BUILD step (or the Jenkins job finishes).
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject Environment variables). When the build is complete, the Environment Variables page for that build does show BUILD_ID=dontKillMe as the value (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server (I'm not worried about that file, because restart_tomcat.sh creates its own LOG file, which I cat after the restart_tomcat.sh script completes. That cat is done in another "Execute shell script on remote host using SSH" build step, and it successfully shows the log file created by the restart script).
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
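If you do go the instance-wide route, the ProcessTreeKiller is disabled with a Java system property when Jenkins is started (a sketch; adjust it to however your Jenkins is launched):
java -Dhudson.util.ProcessTree.disable=true -jar jenkins.war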
The BUILD_ID=dontKillMe should be passed to the shell, and therefore it should be in your command line, not in Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
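If neither works on its own, the two suggestions can also be combined in the SSH step, for example:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}" > /dev/null 2>&1 &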
My solution (it worked after trying everything else), on Ubuntu 14.04 (Trusty Tahr) on Amazon EC2, with Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one.
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the export line above at the start of my script and the issue was resolved.
Hello people.
I'm using Jenkins as a CI server and I need to run some performance tests using JMeter. I've set up the plugin and configured my workspace and everything works OK, but I have to do some steps manually and I want a bit more automation.
Currently I have some small programs on a remote server. These programs perform some specific validations, for instance (just to explain): validating e-mail addresses, phone numbers, etc.
So, before I run the build in Jenkins, I have to manually start the program (file.sh) I want:
I have to use PuTTY (or any other SSH client) to connect to the server and then run, for instance, the command
./email_validation.sh
The JMeter test then runs correctly, and when the test is done I have to manually "shut down" the program I started. But what I want is to start the program I need from the Jenkins configuration (not manually outside Jenkins, but in an "execute shell" or "execute remote shell using ssh" build step).
I have tried to start it that way, but it gets stuck, because when the Jenkins build reaches the command
./email_validation.sh
the build stops: it waits for the command to finish and only then continues with the other build steps, but obviously I need this step not to finish until the test has executed.
Is there a way to achieve this? Thanks
Run your command as a background process by adding the & symbol at the end of the command and use the nohup command in case the parent process gets a hangup signal, e.g.
nohup /path/to/email_validation.sh &
If the script produces any output, it will by default go to the file nohup.out in the directory that was current when the script was launched.
You can kill the process at the end of the build by running:
pkill -f email_validation.sh
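Putting it together, the two "execute shell script on remote host using SSH" build steps (one before the JMeter test, one after it) could look like this sketch, using the script from your example:
# build step before the JMeter test: start the validator detached from the SSH session
nohup /path/to/email_validation.sh > /dev/null 2>&1 &
# build step after the test: stop it again
pkill -f email_validation.sh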
My trunk has structure:
\trunk
----\data
----\src
----\tool
where \tool is an external pointing to another location, not in my trunk. So I don't want users to commit to \tool in SVN; they should only be able to commit to \data or \src.
Can anybody help me create a hook script to prevent users from committing to the external (in this case the \tool folder)?
I'm not very familiar with externals, but if you do want to create a pre-commit hook script, the code is pretty easy, though it can be tricky to debug.
Your pre-commit hook takes two parameters: $ARGV[0] = the repository path, $ARGV[1] = the transaction being committed.
Your hook script would use svnlook, something like
svnlook dirs-changed $ARGV[0] -t $ARGV[1]
and return a non-zero exit status if svnlook reports that tool (or anything under tool) changed.
Anything you print to STDERR is displayed to client as the error message.
You would place this script in your repository under hooks, name it "pre-commit", and make it executable.
Be sure to check the svnlook documentation, as I'm going from memory here.
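To make that concrete, here is a minimal sketch of such a hook as a shell script; the trunk/tool prefix and the error message are assumptions based on the question, so adjust them to your repository layout:
#!/bin/sh
# pre-commit - reject commits that touch the external tool directory
REPOS="$1"
TXN="$2"
SVNLOOK=/usr/bin/svnlook

# list the directories changed in this transaction and look for the tool/ prefix
if "$SVNLOOK" dirs-changed "$REPOS" -t "$TXN" | grep -E '^trunk/tool(/|$)' > /dev/null; then
    echo "Commits to trunk/tool are not allowed; it is an external." >&2
    exit 1
fi
exit 0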