I have a master process which forks and execs child processes from defined executable files.
Both the master and the child processes run infinite loops; the master restarts any child that stops.
Now I want to update the executable files of child processes. Something like:
1. copy new executable files
2. kill child processes (when they're idle)
3. master process restarts them with new executables
But a simple cp "new_exec" "old_exec" returns an error:
"Cannot open or remove a file containing a running program."
Question(s):
Can I use "cp -f new_file old_file"?
Would that affect old running processes (Is everything loaded into memory on process start)?
I am on AIX.
Can I use "cp -f new_file old_file"?
Would that affect old running processes (Is everything loaded into memory on process start)?
'No' and 'yes'. But, if you had asked how to solve this, I would say either:
rm /oldpath/exec
cp /newpath/exec /oldpath/exec
or rather:
cp /newpath/exec /oldpath/exec_replace
mv /oldpath/exec_replace /oldpath/exec
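Why the second form works: the running children keep executing their original program image, and mv only replaces the directory entry, so the old image stays available to them while any new exec() picks up the new file. A rough sketch of the update step, using the same placeholder paths as above:
# Stage the new binary on the same filesystem so the rename is atomic
cp /newpath/exec /oldpath/exec_replace
chmod +x /oldpath/exec_replace
# Swap the directory entry; already-running children keep their old image,
# and the master's next fork+exec picks up the new file
mv /oldpath/exec_replace /oldpath/exec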
I installed Slurm on a workstation and it seemed to work; I can use the Slurm commands, and srun works too.
But when I try to launch a job from a script using sbatch test.sh, I get the following error: Batch job submission failed: I/O error writing script/environment to file, even with the simplest script, such as
#!/bin/bash
srun hostname
Make sure slurmd is running as root. See the SlurmdUser parameter in slurm.conf; its default value is root and it should stay that way.
Note this is different from the SlurmUser parameter, which defines the user that runs the controller processes; that one should preferably not be root.
If the configuration is correct, then you might have a faulty filesystem at the location pointed to by the SlurmdSpoolDir parameter, where slurmd writes the submission script and environment for jobs assigned to the node.
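A quick way to check what the daemons actually see (a sketch; the spool path below is only the common default, yours may differ):
# Show the relevant parameters as resolved by the running configuration
scontrol show config | grep -E 'SlurmdUser|SlurmUser|SlurmdSpoolDir'
# Check that the spool directory exists and is writable by the slurmd user
ls -ld /var/spool/slurmd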
I want to extract execution traces (e.g., visited basic blocks) when testing the Apache server (httpd). Since my work is based on the LLVM infrastructure, I chose to use Clang's instrumentation-based profiling as follows:
clang -fprofile-instr-generate ${options to compile httpd} -o httpd
export LLVM_PROFILE_FILE=code-%p.profraw
sudo -E ./httpd -k start # output a .profraw
curl ${url} # send a request
sudo -E ./httpd -k stop # output another .profraw
The compilation of the instrumented httpd works well.
However, I want to track httpd's request handling, which is executed in a separate child process. The output .profraw does not record any execution from the child processes, so I can only see the execution traces of starting and stopping the server. How can I get a .profraw that includes request handling?
I'm not restricted to Clang profiling; any solution compatible with LLVM is great. Thanks!
Update
From the logs, it turns out the child process, whose owner is "daemon", has no write permission to the files:
LLVM Profile Error: Failed to write file "code-94752.profraw": Permission denied
Problem solved
The problem is a collision of profile file names. The process started by httpd -k start creates multiple child processes as workers. When using LLVM_PROFILE_FILE=code-%p.profraw, they all resolve %p to the same value and therefore target the same file. The main process, owned by root, creates the profile file first; the later processes, owned by daemon, then cannot write to it.
Solution: use LLVM_PROFILE_FILE=code-%9m.profraw (%Nm instead of %p) to avoid the name collisions.
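Once each worker writes its own file, the per-process profiles can be merged into a single indexed profile; a minimal sketch, assuming the instrumented binary is ./httpd and the .profraw files landed in the current directory:
# Merge all per-process raw profiles into one indexed profile
llvm-profdata merge -sparse code-*.profraw -o httpd.profdata
# Map it back to source, provided httpd was also built with
# -fcoverage-mapping (an extra flag beyond the one shown above)
llvm-cov report ./httpd -instr-profile=httpd.profdata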
I use the following command to find out if file descriptor is opened:
/usr/sbin/lsof -a -c sqlplus -u ${USER} | grep -l "${FILE_NAME}"
If it is not, I perform some actions. The file is a log spooled from sqlplus.
Sometimes lsof reports that the file descriptor is not open, but afterwards I find new data in the file. It happens very rarely, so I cannot reproduce it.
What can be the reason?
How does sqlplus spooling work?
Does it keep the file descriptor open from the SPOOL file command until the SPOOL OFF command, or does it open and close the file descriptor several times?
You probably have a "race condition": sqlplus opened the file, put some new data in it, and closed it between the time lsof checked the file and the time you used the result of lsof to process it.
Often, the best way to avoid race conditions in a file system is to rename the file before processing it. Renaming is a relatively cheap operation, and it stops other processes from opening or modifying the file under its old name while your process deals with it. You do need to make sure that, if another process had the file open when it was renamed, you wait until it is no longer being accessed through that open file handle before your process reads the data.
Most programmers write code that is littered with race conditions. These cause all sorts of unreproducible bugs. You'll be a much better programmer if you keep in mind that almost all programs have multiple processes sharing resources and that sharing must always be managed.
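A minimal sketch of the rename-first approach, reusing the names from the question (it assumes the temporary name stays on the same filesystem, so the rename is atomic):
# Claim the log atomically; sqlplus can no longer reopen this copy by name
tmp="${FILE_NAME}.$$"
mv "${FILE_NAME}" "${tmp}"
# A process that already had the file open still holds the same inode under
# the new name, so wait until nothing has it open before reading the data
while /usr/sbin/lsof -- "${tmp}" > /dev/null 2>&1; do
    sleep 1
done
# ... process "${tmp}" here ...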
I'm accessing a webserver via PHP. I want to update some info in the Apache configs, so I start a shell script that makes the changes. Then I want to stop and restart Apache.
Problem: as soon as I stop Apache, my process stops and my shell script, being a child process, is killed. Apache never restarts. This also happens with Apache restart.
Is there a way to fork an independent, non-child process for the shell script, so I can restart Apache?
Thx,
Mr B
You can use disown:
disown [-ar] [-h] [jobspec ...]
Without options, each jobspec is removed from the table of active jobs. If the `-h' option is given, the job is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. If jobspec is not present, and neither the `-a' nor `-r' option is supplied, the current job is used. If no jobspec is supplied, the `-a' option means to remove or mark all jobs; the `-r' option without a jobspec argument restricts operation to running jobs.
./myscript.sh &
disown
./myscript.sh will continue running even if the script that started it dies.
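In the PHP context that might look like the sketch below, with a hypothetical restart_apache.sh that stops Apache, applies the config changes, and starts it again; the redirections keep the detached script from hanging on to the web request's input and output:
# Launched from the web-facing code; the script then outlives the request
./restart_apache.sh < /dev/null > /tmp/restart_apache.log 2>&1 &
disown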
Take a look at nohup; it may fit your needs.
Let's say you have a script called test.sh:
for i in $(seq 100); do
    echo "$i" >> test.temp
    sleep 1
done
If you run nohup ./test.sh &, you can kill the shell and the process stays alive.
Hi, I am new to Valgrind. I know how to run Valgrind on executable files from the command line, but how do you run it on server processes like apache/mysqld/traffic server, etc.?
I want to run Valgrind on Traffic Server (http://incubator.apache.org/projects/trafficserver.html) to detect some memory leaks taking place in the plugin I have written.
Any suggestions?
Thanks,
pigol
You have to start the server under Valgrind's control. Simply take the server's normal start command, and prepend it with valgrind.
Valgrind will follow every process your main "server" process forks; if the server execs other programs you also want traced, add --trace-children=yes. When each thread or process ends, Valgrind will output its analysis, so I'd recommend directing that to a file (by default it comes out on stderr).
If your usual start command is /usr/local/mysql/bin/mysqld, start the server instead with valgrind /usr/local/mysql/bin/mysqld.
If you usually start the service with a script (like /etc/init.d/mysql start) you'll probably need to look inside the script for the actual command the script executes, and run that instead of the script.
Don't forget to pass the --leak-check=full option to valgrind to get the memory leak report.
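For example, a sketch for the Traffic Server case (the binary path is an assumption; --trace-children=yes makes Valgrind follow into exec'd child processes, and --log-file with %p writes one report per process):
valgrind --leak-check=full --trace-children=yes --log-file=valgrind-%p.log \
    /usr/local/trafficserver/bin/traffic_server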