use select() to send signal to child process

I am trying to send signals (e.g. SIGSTOP, SIGUSR1) from the parent process to the child process based on different user inputs. The parent process keeps waiting for user input and sends the corresponding signal to the child. If there is no user input, the child does its own job.
I have put my OCaml code here, but I am not sure I am doing it the right way.
I am writing in OCaml but solutions in other languages (e.g. C/Python) are also welcome.
let cr, pw = Unix.pipe () in
let pr, cw = Unix.pipe () in
match Unix.fork () with
| 0 -> (* child *)
  Unix.close pr;
  Unix.close pw;
  Unix.dup2 cw Unix.stdout;
  Unix.execvp ... (* execute something *)
| pid -> (* parent *)
  Unix.close cr;
  Unix.close cw;
  Unix.dup2 pr Unix.stdin;
  while true do
    try
      match Unix.select [pr] [] [] 0.1 with
      | ([], [], []) -> (* no user input *)
        (* I assume it should do next iteration and wait for next user input *)
        raise Exit
      | (_, _, _) -> (* some user input *)
        let i = read_int () in
        (* send signal to the child process *)
        if i = 1 then Unix.kill pid Sys.sigstop
        else if i = 2 then Unix.kill pid Sys.sigusr1
    with Exit -> ()
  done
Meanwhile, if I would like to define some signals (using SIGUSR1), how and where should I do it?
Thanks!

It's not very clear what you're trying to do. Here are some comments on the code you show.
The parent process seems to be reading a pipe that it has itself created (pr). But you say the parent process is waiting for user input. User input isn't going to show up in a pipe that you create yourself.
Almost always you look for user input by reading the standard input, Unix.stdin.
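For instance, a minimal sketch (my own illustration, not taken from your code) that polls Unix.stdin with select and reads a line when one is ready could look like this:
(* Hedged sketch: poll Unix.stdin for user input with a 0.1 s timeout.
   Returns Some line if the user typed something, None otherwise. *)
let poll_stdin () =
  match Unix.select [Unix.stdin] [] [] 0.1 with
  | ([], _, _) -> None                     (* timed out: no user input *)
  | (_, _, _) -> Some (input_line stdin)   (* a line should be ready *)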
The code creates another pipe that appears to be intended for the use of the child process, but there are no arrangements for the file descriptors of the pipe to be accessed by the child. The child will instead read the standard input of the parent and write to the parent's standard output.
Your code calls select with a timeout of 0.1 seconds. This means the call will return 10 times per second whether there is any input in the pipe or not. Every time it returns it will write a newline. So the output will be a stream of newlines appearing at the rate of around 10 per second.
You say you want to define signals, but it's not at all clear what this means. If you mean you want to define signal handlers for the child, this isn't possible in the code you show. Handlers of signals are not preserved across a call to Unix.execvp. If you think about it, this is the only way things could work, since the execvp call obliterates all the code in the parent process and replaces it with code from some other executable file.
It is not difficult to fork a child and then send it signals. But it's not clear what you're trying to do with the pipes and with the select. And it's not clear what you expect the signals to do in the child process. If you explain these, it would be easier to give a more detailed answer.
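As a minimal sketch, assuming the child stays inside the same OCaml program (i.e. no execvp), the child can install its own SIGUSR1 handler with Sys.set_signal and the parent can then signal it:
(* Hedged sketch: no execvp, so the child can define the handler itself. *)
let () =
  match Unix.fork () with
  | 0 ->
    (* child: decide here what SIGUSR1 should mean *)
    Sys.set_signal Sys.sigusr1
      (Sys.Signal_handle (fun _ -> print_endline "child: got SIGUSR1"));
    Unix.sleep 5;   (* stand-in for the child's own job *)
    exit 0
  | pid ->
    (* parent: give the child a moment, then signal it and wait for it *)
    Unix.sleep 1;
    Unix.kill pid Sys.sigusr1;
    ignore (Unix.waitpid [] pid)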
Update
In general, it's not a good idea to modify code that you've posted to Stack Overflow (other than to fix typos), because the previous comments and answers no longer make sense with the new code. It's better to post the updated code as an addition.
Your new code looks more like you're trying to read input from the child through a pipe. This is more sensible, but then it is not what I would call "user input."
You haven't specified where the child's input is supposed to come from, but I suspect you are planning to send input through the other pipe.
If so, this is a notoriously unreliable setup due to buffering between the child process and its pipes. If you haven't written the code for the child process yourself, there's no way to be sure it will read the proper sized data from the read pipe and flush its output to the write pipe at appropriate times. The usual result is that things immediately stall and make no progress.
If you are writing the code for the child process, you need to make sure it reads data in sizes that are being written by the parent. If the child asks for more data than this, it will block. If the parent is waiting for the answer to appear in its read pipe, you will have deadlock (which is the usual result unless you're very careful).
You also need to make sure the child flushes its output any time it is ready to be read by the parent process. And you also need to flush the output of the parent process any time you want it to be read by the child process. (And the parent has to write data in the sizes that are expected by the child.)
You haven't explained what you're trying to do with signals, so I can't help with that. It doesn't make much sense (to me) to read integer values written by the child process and send signals to the child process in response.
Here is some code that works. It doesn't do anything with signals, because I don't understand what you're trying to do. But it sets up the pipes properly and sends some fixed-length lines of text through the child process. The child process just changes all the characters to upper case.
let cr, pw = Unix.pipe ()
let pr, cw = Unix.pipe ()

let () =
  match Unix.fork () with
  | 0 ->
    (* Child process *)
    Unix.close pr;
    Unix.close pw;
    Unix.dup2 cr Unix.stdin;
    Unix.dup2 cw Unix.stdout;
    Unix.close cr;
    Unix.close cw;
    let rec loop () =
      match really_input_string stdin 6 with
      | line ->
        let ucline = String.uppercase_ascii line in
        output_string stdout ucline;
        flush stdout;
        loop ()
      | exception End_of_file -> exit 0
    in
    loop ()
  | pid ->
    (* Parent process *)
    Unix.close cr;
    Unix.close cw;
    Unix.dup2 pr Unix.stdin;
    Unix.close pr;
    List.iter
      (fun s ->
        let slen = String.length s in
        ignore (Unix.write_substring pw s 0 slen);
        let (rds, _, _) =
          Unix.select [Unix.stdin] [] [] (-1.0)
        in
        if rds <> [] then
          match really_input_string stdin 6 with
          | line -> output_string stdout line
          | exception End_of_file ->
            print_endline "unexpected EOF")
      ["line1\n"; "again\n"];
    Unix.close pw;
    exit 0
When I run it I see this:
$ ocaml unix.cma m.cmo
LINE1
AGAIN
If you change either of the test lines to be shorter than 6 bytes, you'll see the deadlock that usually happens when people try to set up this dual-pipe scheme.
Your code for sending signals looks OK, but I don't know what you expect to happen when you send one.
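If it helps, here is a rough sketch, assuming pid and pr are set up as above and that pr has not been dup2'd onto stdin, of a parent loop that watches the user's terminal and the child's pipe at the same time with a single blocking select. It is only a guess at what you may be aiming for, not a drop-in solution:
(* Hedged sketch: one blocking select over stdin (user) and pr (child). *)
let rec watch pid pr =
  let ready, _, _ = Unix.select [Unix.stdin; pr] [] [] (-1.0) in
  List.iter
    (fun fd ->
       if fd = Unix.stdin then
         (* user input: dispatch signals as in your code *)
         (match int_of_string_opt (input_line stdin) with
          | Some 1 -> Unix.kill pid Sys.sigstop
          | Some 2 -> Unix.kill pid Sys.sigusr1
          | _ -> ())
       else begin
         (* child output: read whatever is available *)
         let buf = Bytes.create 1024 in
         let n = Unix.read pr buf 0 1024 in
         if n = 0 then exit 0   (* child closed its end *)
         else print_string (Bytes.sub_string buf 0 n)
       end)
    ready;
  watch pid pr
Note that mixing the buffered stdin channel (input_line) with select is only reasonable here because a terminal normally delivers a line at a time; a more robust version would read stdin with Unix.read and do its own line splitting.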

Related

How to update "variables" in main with IO and GTK [duplicate]

While trying to learn to write applications with Gtk2Hs, I'm having difficulty bridging the gap between the event-driven Gtk2Hs and the persistent state of my model. To simplify, let's say that I have this simple application
module Main where
import Graphics.UI.Gtk
import Control.Monad.State
main = do
  initGUI
  window <- windowNew
  button <- buttonNew
  set button [buttonLabel := "Press me"]
  containerAdd window button
  -- Events
  onDestroy window mainQuit
  onClicked button (putStrLn ---PUT MEANINGFUL CODE HERE---)
  widgetShowAll window
  mainGUI
and the state of my application is how many times the button has been pressed. Other posts like this one rely on MVars or IORefs, which do not seem satisfactory to me, because in the future I may want to refactor the code so that the state lives in its own context.
I think that the solution should use the State monad with a step function like:
State $ \s -> ((),s+1)
but I'm not sure about the implications, how to do that in the above code or even if that monad is the right solution to my problem.
There are basically two approaches:
Use a pointer of some kind. This is your IORef or MVar approach. You can hide this behind a MonadState-like interface if you like:
newtype GtkT s m a = GtkT { unGtkT :: ReaderT (IORef s) m a }
  deriving (Functor, Applicative, Monad, MonadIO)

runGtkT = runReaderT . unGtkT

instance MonadIO m => MonadState s (GtkT s m) where
  get   = GtkT (ask >>= liftIO . readIORef)
  put s = GtkT (ask >>= liftIO . flip writeIORef s)
Pull an "inversion of control" style trick. Write a callback that prints a number, then replaces itself with a new callback that prints a higher number.
If you try to use State or StateT directly, you're gonna have a bad time.

Receiving message from channel as guard

A: if
:: q?a -> ...
:: else -> ...
fi
Note that a race condition is built-in to this type of code. How long should the process wait, for instance, before deciding that the message receive operation will not be executable? The problem can be avoided by using message poll operations, for instance, as follows:
The above citation comes from http://spinroot.com/spin/Man/else.html
I cannot understand that argumentation. Spin can just decide on q?a: if q is not empty, the receive is executable; otherwise it is not (it would block), so else can be taken. Yet the quoted argument claims this creates a race condition.
But I can make the same argument:
byte x = 1;
A: if
:: x == 2 -> ...
:: else -> ...
fi
This is fine from Spin's point of view. But by the same reasoning I could ask: how long should the process wait, for instance, before deciding that the value of x will not be incremented by another process?
The argumentation is sound with respect to the semantics of Promela and the selection construct. Note that for selection, if multiple guard statements are executable, one of them will be selected non-deterministically. This in turn implies that the selection construct (even though it chooses among executable guards non-deterministically) needs to determine which guards are executable at the point at which the selection statement is invoked.
The question about the race condition makes more sense when considering the semantics of selection together with message receives. A race condition here means that the outcome of the selection might depend on the moment at which the receive is attempted (i.e. whether that happens at a point at which there is a message in the channel or not).
More specifically, for the selection statement there should be no ambiguity about which guards are feasible. Now, a message receive gets a message from the channel only if the channel is not empty (otherwise it cannot finish executing and waits). So, with respect to the semantics of receive, it is not clear whether it is executable until it is actually executed. In turn, else should execute only if the receive is not executable, so to know whether else is executable the program would need to know the future (or decide how long to wait before concluding this, which is exactly the race condition).
Note that the argument does not apply to your second example:
byte x = 1;
A: if
:: x == 2 -> ...
:: else -> ...
fi
because here no waiting (and no knowledge of the future) is required to decide whether else is eligible: the program can determine at any point whether x == 2.

Create a Signal from a List

Is it possible to create a Signal from a List? Essentially what I want is something with the signature List a -> Signal a. I know that a Signal represents a time-varying value and so something like this doesn't actually make any sense (i.e. I can't think of a reason to use it in production code).
I could see applications of it for testing though. For example, imagine some function which depended on the past values of a Signal (via foldp for instance) and you wanted to make assertions about the state of the system given the signal had received values x, y, and z.
Note that there wouldn't have to be anything special about the Signal denoting that it only would ever receive a fixed number of values. I'm thinking of it more like: in production you have a Signal of mouse clicks, and you want to test that from a given starting position, after a given set of clicks, the system should be in some other known state. I know you could simulate this by calling the function a fixed number of times and feeding the results back in with new values, I'm just wondering if it is possible.
I guess it's possible. You use a time-based signal, and map the values from the list over it:
import Time
import Graphics.Element exposing (show)

list = [1..10]

signalFromList : List a -> Signal a
signalFromList list =
  let
    (Just h) =
      List.head list

    time =
      Time.every Time.second

    maybeFlatMap =
      flip Maybe.andThen

    lists =
      Signal.foldp (always <| maybeFlatMap List.tail) (Just list) time
  in
    Signal.filterMap (maybeFlatMap List.head) h lists

main = Signal.map show <| signalFromList list
However!
It shouldn't be hard to do testing without signals. If you have a foldp somewhere, in a test you can use List.foldl over a list [x,y,z] instead. That should give you the ability to look at the state of your program after inputs x, y, z.
I don't think there is any way it can be done synchronously in pure Elm (Apanatshka's answer illustrates well how to set up a sequence of events across time and why it's a bad idea). If we look at how most Signals are defined, we'll see they all head down into a native package at some point.
The question then becomes: can we do this natively?
f : List a -> Signal a
I often think of (Signal a) as 'an a that changes over time'. Here we provide a List of as, and want the function to make it change over time for us.
Before we go any further, I recommend a quick look at Native/Signal.js: https://github.com/elm-lang/core/blob/master/src/Native/Signal.js
Let's say we get down to native land with our List of as. We want something a bit like Signal.constant, but with some extra behaviour that 'sends' each a afterwards. When can we do the sending, though? I am guessing we can't do it during the signal construction function, as we're still building the signal graph. This leaves us a couple of other options:
- something heinous with setTimeout, scheduling the sending of each 'a' at an appropriate point in the future
- engineering a hook into the elm runtime so that we can run an arbitrary callback at the point when the signal graph is fully constructed
To me at least, the former sounds error prone, and I hope the latter doesn't exist (and never does)!
For testing, your suggestion of using a List fold to mimic the behaviour of foldp would be the way I would go.

Go - How to know when an output channel is done

I tried to follow Rob Pike's example from the talk 'Concurrency is not parallelism' and did something like this:
I'm starting many goroutines as workers that read from an input channel, perform some processing, and then send the result through the output channel.
Then I start another go routine that reads data from some source and send it to the workers through their input channel.
Lastly I want to iterate over all of the results in the output channel and do something with them.
The problem is that because the work is split between the workers, I don't know when all of the workers have finished, so I can't tell when to stop reading from the output channel so that my program can end properly.
What is the best practice to know when workers have finished sending results to an output channel?
I personally like to use a sync.WaitGroup for that. A waitgroup is a synchronized counter that has three methods - Wait(), Done() and Add(). What you do is increment the waitgroup's counter, pass it to the workers, and have them call Done() when they're done. Then you just block on the waitgroup on the other end and close the output channel when they're all done, causing the output processor to exit.
Basically:
// create the wait group
wg := sync.WaitGroup{}

// this is the output channel
outchan := make(chan whatever)

// start the workers
for i := 0; i < N; i++ {
    wg.Add(1) //we increment by one the waitgroup's count
    //the worker pushes data onto the output channel and calls wg.Done() when done
    go work(&wg, outchan)
}

// this is our "waiter" - it blocks until all workers are done and closes the channel
go func() {
    wg.Wait()
    close(outchan)
}()

//this loop will exit automatically when outchan is closed
for item := range outchan {
    workWithIt(item)
}
// TADA!
First, please let me clarify your terminology: a misunderstanding about the ends of channels could cause problems later. You ask about "output channels" and "input channels". There is no such thing; there are only channels.
Every channel has two ends: the output (writing) end, and the input (reading) end. I will assume that that is what you meant.
Now to answer your question.
Take the simplest case: you have only one sender goroutine writing to a channel, and you only have one worker goroutine reading from the other end, and the channel has zero buffering. The sender goroutine will block as it writes each item until that item has been consumed. Typically this happens quickly the first time. Once the first item has passed to the worker, the worker will then be busy and the sender will have to wait before the second item can be passed over. So a ping-pong effect follows: either the writer or the reader will be busy but not both. The goroutines will be concurrent in the sense described by Rob Pike, but not always actually executing in parallel.
In the case where you have many worker goroutines reading from the channel (and its input end is shared by all of them), the sender can initially distribute one item to each worker, but then it has to wait whilst they work (similar to the ping-pong case described above). Finally, when all items have been sent by the sender, it has finished its work. However, the readers may not, yet, have finished their work. Sometimes we care that the sender finishes early, and sometimes we don't. Knowing when this happens is most easily done with a WaitGroup (see Not_a_Golfer's answer and my answer to a related question).
There is a slightly more complex alternative: you can use a return channel for signalling completion instead of a WaitGroup. This isn't hard to do, but WaitGroup is preferred in this case, being simpler.
If instead the channel were to contain a buffer, the point at which the sender had sent its last item would come sooner. In the limit case, when the channel has one buffer space per worker, the sender can complete very quickly and then, potentially, get on with something else. (Any more buffering than this would be wasteful.)
This decoupling of the sender allows a fully asynchronous pattern of behaviour, beloved of people using other technology stacks (Node.js and the JVM spring to mind). Unlike them, Go doesn't need you to do this, but you have the choice.
Back in the early '90s, as a side-effect of work on the Bulk Synchronous Parallelism (BSP) strategy, Leslie Valiant proved that sometimes very simple synchronisation strategies can be cheap. The crucial factor is that there is a need for enough parallel slackness (a.k.a. excess parallelism) to keep the processor cores busy. That means there must be plenty enough other work to be done so that it really doesn't matter if any particular goroutine is blocked for a period of time.
Curiously, this can mean that working with smaller numbers of goroutines might require more care than working with larger numbers.
Understanding the impact of excess parallelism is useful: it is often not necessary to put extra effort into making everything asynchronous if the network as a whole has excess parallelism, because the CPU cores would be busy either way.
Therefore, although it is useful to know how to wait until your sender has completed, a larger application may not need you to be concerned in the same way.
As a final footnote, WaitGroup is a barrier in the sense used in BSP. By combining barriers and channels, you are making use of both BSP and CSP.
var Z = "Z"

func Loop() {
    sc := make(chan *string)
    ss := make([]string, 0)
    done := make(chan struct{}, 1)

    go func() {
        //1 QUERY
        slice1 := []string{"a", "b", "c"}
        //2 WG INIT
        var wg1 sync.WaitGroup
        wg1.Add(len(slice1))
        //3 LOOP->
        loopSlice1(slice1, sc, &wg1)
        //7 WG WAIT<-
        wg1.Wait()
        sc <- &Z
        done <- struct{}{}
    }()

    go func() {
        var cc *string
        for {
            cc = <-sc
            log.Infof("<-sc %s", *cc)
            if *cc == Z {
                break
            }
            ss = append(ss, *cc)
        }
    }()

    <-done
    log.Infof("FUN: %#v", ss)
}

func loopSlice1(slice1 []string, sc chan *string, wg1 *sync.WaitGroup) {
    for i, x := range slice1 {
        //4 GO
        go func(n int, v string) {
            //5 WG DONE
            defer wg1.Done()
            //6 DOING
            //[1 QUERY
            slice2 := []string{"X", "Y", "Z"}
            //[2 WG INIT
            var wg2 sync.WaitGroup
            wg2.Add(len(slice2))
            //[3 LOOP ->
            loopSlice2(n, v, slice2, sc, &wg2)
            //[7 WG WAIT <-
            wg2.Wait()
        }(i, x)
    }
}

func loopSlice2(n1 int, v1 string, slice2 []string, sc chan *string, wg2 *sync.WaitGroup) {
    for j, y := range slice2 {
        //[4 GO
        go func(n2 int, v2 string) {
            //[5 WG DONE
            defer wg2.Done()
            //[6 DOING
            r := fmt.Sprintf("%v%v %v,%v", n1, n2, v1, v2)
            sc <- &r
        }(j, y)
    }
}

C Fork and Pipe closing ends

I am building an application that requires two-way communication with a few child processes. My parent is like a query engine, constantly reading words from stdin and passing each one to every child process. The child processes perform their processing and write back to the parent on their exclusive pipes.
This is theoretically how it should work, but I am stuck on the implementation details. The first issue is: do I create two pipes before forking the child? When I fork, I know the child is going to inherit the parent's set of file descriptors; does this mean that four pipes will be created, or simply that the two pipes' descriptors are duplicated? If they are duplicated in the child, does that mean that if I were to close a file descriptor in the child, it would also close the parent's?
My theory is the following; I simply need clarification and to be put on the right track. This is untested code, I just wrote it to give you an idea of what I am thinking. Thanks, any help is appreciated.
int main(void){
    int fd[2][2]; //2 pipes for in/out

    //make the pipes
    pipe(fd[0]); //WRITING pipe
    pipe(fd[1]); //READING pipe

    if(fork() == 0){
        //child
        //close some ends
        close(fd[0][1]); //close the WRITING pipe write end
        close(fd[1][0]); //close the READING pipe read end

        //start the worker which will read from the WRITING pipe
        //and write back to the READING pipe
        start_worker(fd[0][0], fd[1][1]);
    }else{
        //parent
        //close the reading end of the WRITING pipe
        close(fd[0][0]);
        //close the writing end of the READING pipe
        close(fd[1][1]);

        //write data to the child down the WRITING pipe
        write(fd[0][1], "hello\n", 6);

        //read from the READING pipe
        int nbytes;
        char word[MAX];
        while ((nbytes = read(fd[1][0], word, MAXWORD)) > 0){
            printf("Data from child is: %s\n", word);
        }
    }
}
The pipe itself is not duplicated upon fork.
A single pipe is unidirectional and has two descriptors - one for reading, one for writing. So if, for example, process A closes the write descriptor and process B closes the read descriptor, you have a pipe from B to A.
Closing a descriptor in one process doesn't affect the descriptor in another process. After forking, each process has its own descriptor space, which is a copy of the parent process's descriptors. Here's an excerpt from the fork() man page:
The child process shall have its own copy of the parent's file descriptors. Each of the child's file descriptors shall refer to the same open file description with the corresponding file descriptor of the parent.