I am running an Oracle expdp process from a remote client using the command line. I can see the process start, and a few progress messages are shown in the command prompt window. But after some time there are no further progress messages. When I checked the log created in the server directory, I could see that the expdp process had completed and the .dmp file was generated. What could be the reason the command prompt on the client stops receiving progress updates after some time?
Below is a sample of the expdp command used:
expdp Schemas=XXXX directory=exportdir dumpfile=xxxx.dmp logfile=xxxx.log Job_name=xxxx
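Worth noting: the Data Pump job itself runs entirely on the server, and the client session only relays status, so the job can finish normally even if the client's connection goes quiet. One way to check on the job after the client stops updating is to reattach from a fresh expdp session. A sketch, where the credentials and JOB_NAME placeholders mirror the command above:

expdp xxxx/password attach=xxxx

Export> STATUS
Export> CONTINUE_CLIENT

STATUS prints the current state of the job; CONTINUE_CLIENT switches the session back to logging mode so progress messages resume.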
I am trying to schedule a bcp job in the Windows Server 2012 Task Scheduler. My batch file works fine when I double-click it. It includes this line:
bcp "SELECT * FROM [TIME_KEEPER]" queryout D:\DATA\TIMESHEET_DBASE.csv -S 10.0.0.54 /c /t, -T
The file is created when I run it from the command line. The scheduled task is set up with:
Action: Start a program
Script: D:\DATA\myBatch.bat
Start in: D:\Data
I am using the same account for other scheduled tasks and they are running fine.
Sounds like a security issue.
Do any of the other scheduled tasks use the bcp executable and connect to the same server, pulling data from the same table? If not, then you have to track down the security context being used.
When you double-click your batch file, it runs as the account you are logged in as. Is it possible that your scheduled tasks are running under a different account than the one you are logged in as?
As a test, are you able to log in to the Windows server using the same account under which Task Scheduler executes the tasks (assuming the accounts are different)?
You should get a similar error at that point.
Just a start.
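As a quick way to compare accounts, you can dump the task's configuration from the command line. A sketch, assuming a hypothetical task name MyBatchTask (substitute your actual task name):

schtasks /query /tn "MyBatchTask" /v /fo LIST

The verbose output includes a "Run As User" field, which you can compare against the account you are logged in with when you double-click the batch file.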
I am trying to run a .sql script on a schedule. I have created a batch file to run the script. The script runs fine in SQL Server Management Studio and also when I run the batch file's contents through cmd.
Contents of the batch file:
sqlcmd -S omfmesql -U OMESRV -P orat -i "\\pvsrv-fsr14\data\Projects\Stat_Table_Creation_unique.sql"
The SQL script is supposed to update a stat table. When I run it through cmd and refresh the stat table, the numbers are updated. But when I run this batch file through Task Scheduler, the only action that seems to be performed is running C:\Windows\SYSTEM32\cmd.exe.
The task is reported as completed successfully, but the SQL query is just not run.
I am not too experienced with Task Scheduler. Any help here would be very much appreciated. Thanks!
Note: I am not intending to use SQL Server Agent
If you have not done so, you need to set the "Start in" location in Task Scheduler (TS). In at least some versions of TS, this can only be done when you create a basic task, not from the more general "Create Task..." option. Ensure that all the paths in the batch file are absolute or are relative to this location.
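As an illustration, a minimal version of the batch file with everything absolute might look like the sketch below; the -o switch and the C:\Logs path are assumptions, added here so the scheduled run leaves a log behind:

@echo off
rem Absolute paths only, so the task does not depend on its working directory
sqlcmd -S omfmesql -U OMESRV -P orat -i "\\pvsrv-fsr14\data\Projects\Stat_Table_Creation_unique.sql" -o "C:\Logs\stat_update.log"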
Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
I noticed that when I run the Jenkins job and call the restart script (which takes the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive indefinitely. Running the same script from Jenkins shows no errors and everything appears to work, yet the new process dies as soon as the above-mentioned SSH step completes and control returns to the next BUILD step in the Jenkins job, or the job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject Environment variables). When the job's build is complete, I can see that the Environment Variables page for that build does show BUILD_ID=dontKillMe as the value (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server. (I'm not worried about that file: the restart_tomcat.sh script creates its own log file, which I cat after restart_tomcat.sh completes. The cat is done in another "Execute shell script on remote host using SSH" build step, and it successfully shows the log file created by the restart script.)
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if the instance is relatively small (see the Jenkins ProcessTreeKiller documentation for details).
The BUILD_ID=dontKillMe should be passed to the shell, and therefore it should be on your command line, not in the Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
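Putting those together, the body of the remote SSH build step could look like this sketch (assuming restart_tomcat.sh is on the target machine's PATH and ${app} is your job parameter):

# Tell the ProcessTreeKiller to spare the child process, then fully detach it
export BUILD_ID=dontKillMe
nohup restart_tomcat.sh "${app}" > /dev/null 2>&1 &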
My solution (it worked after trying everything else) on Ubuntu 14.04 (Trusty Tahr) (Amazon EC2), Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
# Example: COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as the last one in my job.
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the above line to the start of my script and the issue was resolved.
I am calling a Perl script on build machine 1 to connect to build machine 2 and call a Perl script on build machine 2. The module I am using is Net::Telnet.
Recently I upgraded BitKeeper on build machine 2. Since then, the BitKeeper license agreement dialog opens in the background, so my script is effectively paused until I kill the prompt's process from Task Manager.
If I kill the process, the bitkeeper clone command fails, and hence my entire build fails. I am not able to bring this sneaky bkgui.exe process to the front and accept the license agreement once and for all.
Can you please help me in solving this problem?
Observations:
I do not get the license error when I open a command prompt on build machine 2 and call the same script that was called through telnet.
I ran the 'whoami' command in my script running on build machine 2 and found it to be administrator.
'C:\WINDOWS\system32\tlntsvr.exe' is running and its USER is 'NT AUTHORITY\SYSTEM'.
When I call telnet from the command line of build machine 1 and call the script on build machine 2, even then the bk command gets executed successfully.
I want to run my bitkeeper command on build machine 2 from build machine 1.
You can try the bk legal -pT command. See bk help legal for usage.
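For example, you could run it once at the top of the script on build machine 2, before any clone. A sketch as a Windows batch fragment, using the flag suggested above (verify the exact flags with bk help legal):

rem Accept the BitKeeper license up front so bkgui.exe never needs to prompt
bk legal -pT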
I have a complex PowerShell script that is run as part of a SQL Server 2005 Agent Job. The script works fine, but it uses the "Start-Transcript $strLogfile -Append" command to log all of its actions to a transcript file. The problem is that the transcript is always empty. It adds the header and footer to indicate that the transcript is starting and stopping, but it doesn't actually log anything. Example:
**********************
Windows PowerShell Transcript Start
Start time: 20100304173001
Username : xxxxxxxxxxxx\SYSTEM
Machine : xxxxx-xxx (Microsoft Windows NT 5.2.3790 Service Pack 2)
**********************
**********************
Windows PowerShell Transcript End
End time: 20100304173118
**********************
When I execute the script from a command prompt or Start -> Run, everything works just fine. Here is the command used to run the script (the same command is used in the Operating system (CmdExec) step of the SQL Agent Job):
powershell.exe -File "c:\temp\Backup\backup script.ps1"
I first thought it must have something to do with the script running under the System account (default SQL Agent account), but even when I tried changing the SQL Agent to run under my own personal account it still created a blank transcript.
Is there any way to get PowerShell transcripts to work when executing them as part of a SQL Server 2005 Agent Job?
If your script uses native commands (console EXEs), Start-Transcript does not log any of that output. This issue has been logged on Connect, and you can vote on it. One way to capture all output is to use cmd.exe:
cmd /c powershell.exe -file "C:\temp\backup script.ps1" > backup.log
sqlps.exe does not implement certain methods, including the method that supports Write-Host. This may explain why you are not seeing output from Start-Transcript when running sqlps.exe from a SQL Agent PowerShell job step. See http://blogs.msdn.com/mwories/archive/2009/09/30/the-use-of-write-host-and-sql-server-agent-powershell-job-steps.aspx for more information.
I am still not sure why the PowerShell transcript is empty, but we found a workaround. Under the CmdExec step of the SQL job there is an advanced option to capture the output to a file, which, combined with the "Append output to existing file" option and a Logfile.rtf extension, is about the same as the PowerShell transcript. This way anything that gets printed to the host from the PowerShell script (including native console executables piped to "| Out-Host") will be captured in the log file.
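For reference, the same capture can be approximated at the command line. A sketch reusing the paths from the question (>> matches the "Append output to existing file" behaviour, and 2>&1 also catches error output):

cmd /c powershell.exe -File "c:\temp\Backup\backup script.ps1" >> "c:\temp\Backup\Logfile.rtf" 2>&1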