C Fork and Pipe closing ends - process

I am building an application that requires two-way communication with a few child processes. The parent acts like a query engine, constantly reading words from stdin and passing them to each child process. The child processes do their work and write back to the parent on their exclusive pipes.
This is theoretically how it should work, but I am stuck on the implementation details. The first issue: do I create 2 pipes before forking the child? I know the child inherits the parent's set of file descriptors when I fork, but does this mean that 4 pipes are created, or simply that the 2 pipes' descriptors are duplicated? If they are duplicated in the child, does closing a file descriptor in the child also close the parent's?
My theory is the following; I simply need clarification and to be put on the right track. This is untested code; I just wrote it to give you an idea of what I am thinking. Thanks, any help is appreciated.
int main(void){
    int fd[2][2]; // 2 pipes for in/out
    // make the pipes
    pipe(fd[0]); // WRITING pipe
    pipe(fd[1]); // READING pipe
    if (fork() == 0) {
        // child
        // close some ends
        close(fd[0][1]); // close the WRITING pipe's write end
        close(fd[1][0]); // close the READING pipe's read end
        // start the worker, which will read from the WRITING pipe
        // and write back to the READING pipe
        start_worker(fd[0][0], fd[1][1]);
    } else {
        // parent
        // close the read end of the WRITING pipe
        close(fd[0][0]);
        // close the write end of the READING pipe
        close(fd[1][1]);
        // write data to the child down the WRITING pipe
        write(fd[0][1], "hello\n", 6);
        // read from the READING pipe
        int nbytes;
        char word[MAX];
        while ((nbytes = read(fd[1][0], word, MAXWORD)) > 0) {
            word[nbytes] = '\0'; // read() does not null-terminate
            printf("Data from child is: %s\n", word);
        }
    }
}

The pipe itself is not duplicated upon fork.
A single pipe is unidirectional and has 2 descriptors: one for reading, one for writing. So if in process A you close, say, the write descriptor, and in process B you close the read descriptor, you have a pipe from B to A.
Closing a descriptor in one process doesn't affect the descriptor in another process. After forking, each process has its own descriptor space, which is a copy of the parent's descriptors. Here's an excerpt from the fork() man page:
The child process shall have its own copy of the parent's file
descriptors. Each of the child's file descriptors shall refer to the
same open file description with the corresponding file descriptor of
the parent.

STM32F4 UART HAL driver 'save string in variable buffer'

I am in the process of writing software for an STM32F4. The STM32 needs to pull in a string via a UART. This string is variable in length and comes in from a sensor every second. The string is stored in a fixed buffer, so the buffer content changes continuously.
The incoming string looks like this: "A12941;P2507;T2150;C21;E0;"
The settings of the UART:
Baud Rate: 19200
Word length: 8 bits
Parity: None
Stop bits: 1
Over sampling: 16 Samples
Global interrupt: Enabled
No DMA settings
Part of the code used in main.c:
uint8_t UART3_rxBuffer[25];

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    HAL_UART_Receive_IT(&huart3, UART3_rxBuffer, 25); // restart interrupt reception mode
}

int main(void)
{
    HAL_UART_Receive_IT(&huart3, UART3_rxBuffer, 25);
    while (1)
    {
    }
}
Part of the code in stm32f4xx_it.c
void USART3_IRQHandler(void)
{
    /* USER CODE BEGIN USART3_IRQn 0 */
    /* USER CODE END USART3_IRQn 0 */
    HAL_UART_IRQHandler(&huart3);
    /* USER CODE BEGIN USART3_IRQn 1 */
    /* USER CODE END USART3_IRQn 1 */
}
It does work to fill the buffer with the variable strings in this way, but because the buffer is constantly being replenished, it is difficult to extract a beginning and an end of the string. For example, the buffer might look like this:
[0]'E' [1]'0' [2]'\n' [3]'A' [4]'1' [5]'2' [6]'9' [7]'4' [8]'1' [9]';' [10]'P' etc....
But I'd like to have a buffer that starts on 'A'.
My question is, how can I process incoming strings on the uart correctly so that I only have the string "A12941;P2507;T2150;C21;E0;"?
Thanks in advance!!
I can see three possibilities:
Do all of your processing in the interrupt. When you get to the end of a variable-length message then do everything that you need to do with the information and then change the location variable to restart filling the buffer from the start.
Use (at least) two buffers in parallel. When you detect the end of the variable-length message in interrupt context then start filling a different buffer from position zero and signal to main context that previous buffer is ready for processing.
Use two buffers in series. Let the interrupt fill a ring buffer in a circular way that takes no notice of when a message ends. In main context scan from the end of the previous message to see if you have a whole message yet. If you do, then copy it out into another buffer in a way that makes it start at the start of the buffer. Record where it finished in the ring-buffer for next time, and then do your processing on the linear buffer.
Option 1 is only suitable if you can do all of your processing in less than the time it takes the transmitter to send the next byte or two. The other two options use a bit more memory and are a bit more complicated to implement. Option 3 could be implemented with circular-mode DMA as long as you poll for new messages frequently enough, which avoids the need for interrupts. Option 2 allows you to queue up multiple messages if your main context might not poll frequently enough.
I would like to share some sample code related to your issue. It is not exactly what you are looking for, but you can edit this code snippet as you wish. If I am not wrong, you can also adapt it to option 3 above.
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART2) {
        HAL_UART_Receive_IT(&huart2, &rData, 1);
        rxBuffer[pos++] = rData;
        if (rData == '\n') {
            pos = 0;
        }
    }
}
To start, in the main function before the while loop, you should enable the interrupt for one byte using "HAL_UART_Receive_IT(&huart2,&rData,1);". If your incoming data has a delimiter like '\n', you can save a whole frame even though each frame may have a different length.
If you want the data frame to start with some specific character, you can wait before saving data until you get that character. In that case, edit this code by changing '\n' to your start character, and once you receive it, start saving the following data into the buffer.

use select() to send signal to child process

I am trying to send some signals (e.g. SIGSTOP, SIGUSR1) from the parent process based on different user inputs to the child process. The parent process keeps waiting for user input and sends the corresponding signals to the child. If there is no user input, the child does its own job.
I put my Ocaml code here but I am not sure I used the right way to do it.
I am writing in OCaml but solutions in other languages (e.g. C/Python) are also welcome.
let cr, pw = Unix.pipe () in
let pr, cw = Unix.pipe () in
match Unix.fork () with
| 0 -> (* child *)
  Unix.close pr;
  Unix.close pw;
  Unix.dup2 cw Unix.stdout;
  Unix.execvp ... (* execute something *)
| pid -> (* parent *)
  Unix.close cr;
  Unix.close cw;
  Unix.dup2 pr Unix.stdin;
  while true do
    try
      match Unix.select [pr] [] [] 0.1 with
      | ([], [], []) -> (* no user input *)
        (* I assume it should do next iteration and wait for next user input *)
        raise Exit
      | (_, _, _) -> (* some user input *)
        let i = read_int () in
        (* send signal to the child process *)
        if i = 1 then Unix.kill pid Sys.sigstop
        else if i = 2 then Unix.kill pid Sys.sigusr1;
    with Exit -> ()
  done
Meanwhile, if I would like to define some signals (using SIGUSR1), how and where should I do it?
Thanks!
It's not very clear what you're trying to do. Here are some comments on the code you show.
The parent process seems to be reading a pipe that it has itself created (pr). But you say the parent process is waiting for user input. User input isn't going to show up in a pipe that you create yourself.
Almost always you look for user input by reading the standard input, Unix.stdin.
The code creates another pipe that appears to be intended for the use of the child process, but there are no arrangements for the file descriptors of the pipe to be accessed by the child. The child will instead read the standard input of the parent and write to the parent's standard output.
Your code calls select with a timeout of 0.1 seconds. This means the call will return 10 times per second whether there is any input in the pipe or not. Every time it returns it will write a newline. So the output will be a stream of newlines appearing at the rate of around 10 per second.
You say you want to define signals, but it's not at all clear what this means. If you mean you want to define signal handlers for the child, this isn't possible in the code you show. Handlers of signals are not preserved across a call to Unix.execvp. If you think about it, this is the only way things could work, since the execvp call obliterates all the code in the parent process and replaces it with code from some other executable file.
It is not difficult to fork a child and then send it signals. But it's not clear what you're trying to do with the pipes and with the select. And it's not clear what you expect the signals to do in the child process. If you explain these, it would be easier to give a more detailed answer.
Update
In general, it's not a good idea to modify code that you've posted to StackOverflow (other than to fix typos), because the previous comments and answers no longer make sense with the new code. It's better to post updated code.
Your new code looks more like you're trying to read input from the child through a pipe. This is more sensible, but then it is not what I would call "user input."
You haven't specified where the child's input is supposed to come from, but I suspect you are planning to send input through the other pipe.
If so, this is a notoriously unreliable setup due to buffering between the child process and its pipes. If you haven't written the code for the child process yourself, there's no way to be sure it will read the proper sized data from the read pipe and flush its output to the write pipe at appropriate times. The usual result is that things immediately stall and make no progress.
If you are writing the code for the child process, you need to make sure it reads data in sizes that are being written by the parent. If the child asks for more data than this, it will block. If the parent is waiting for the answer to appear in its read pipe, you will have deadlock (which is the usual result unless you're very careful).
You also need to make sure the child flushes its output any time it is ready to be read by the parent process. And you also need to flush the output of the parent process any time you want it to be read by the child process. (And the parent has to write data in the sizes that are expected by the child.)
You haven't explained what you're trying to do with signals, so I can't help with that. It doesn't make much sense (to me) to read integer values written by the child process and send signals to the child process in response.
Here is some code that works. It doesn't do anything with signals, because I don't understand what you're trying to do. But it sets up the pipes properly and sends some fixed-length lines of text through the child process. The child process just changes all the characters to upper case.
let cr, pw = Unix.pipe ()
let pr, cw = Unix.pipe ()

let () =
  match Unix.fork () with
  | 0 ->
    (* Child process *)
    Unix.close pr;
    Unix.close pw;
    Unix.dup2 cr Unix.stdin;
    Unix.dup2 cw Unix.stdout;
    Unix.close cr;
    Unix.close cw;
    let rec loop () =
      match really_input_string stdin 6 with
      | line ->
        let ucline = String.uppercase_ascii line in
        output_string stdout ucline;
        flush stdout;
        loop ()
      | exception End_of_file -> exit 0
    in
    loop ()
  | pid ->
    (* Parent process *)
    Unix.close cr;
    Unix.close cw;
    Unix.dup2 pr Unix.stdin;
    Unix.close pr;
    List.iter
      (fun s ->
        let slen = String.length s in
        ignore (Unix.write_substring pw s 0 slen);
        let (rds, _, _) = Unix.select [Unix.stdin] [] [] (-1.0) in
        if rds <> [] then
          match really_input_string stdin 6 with
          | line -> output_string stdout line
          | exception End_of_file -> print_endline "unexpected EOF")
      ["line1\n"; "again\n"];
    Unix.close pw;
    exit 0
When I run it I see this:
$ ocaml unix.cma m.cmo
LINE1
AGAIN
If you change either of the test lines to be shorter than 6 bytes, you'll see the deadlock that usually happens when people try to set up this dual-pipe scheme.
Your code for sending signals looks OK, but I don't know what you expect to happen when you send one.

How to read all bytes of a file in winrt with ReadBufferAsync?

I have an array of objects that each needs to load itself from binary file data. I create an array of these objects and then call an AsyncAction for each of them that starts it reading in its data. Trouble is, they are not loading entirely - they tend to get only part of the data from the files. How can I make sure that the whole thing is read? Here is an outline of the code: first I enumerate the folder contents to get a StorageFile for each file it contains. Then, in a for loop, each receiving object is created and passed the next StorageFile, and it creates its own Buffer and DataReader to handle the read. m_file_bytes is a std::vector.
m_buffer = co_await FileIO::ReadBufferAsync(nextFile);
m_data_reader = winrt::Windows::Storage::Streams::DataReader::FromBuffer(m_buffer);
m_file_bytes.resize(m_buffer.Length());
m_data_reader.ReadBytes(m_file_bytes);
My thought was that since the buffer and reader are class members of the object they would not go out of scope and could finish their work uninterrupted as the next objects were asked to load themselves in separate AsyncActions. But the DataReader only gets maybe half of the file data or less. What can be done to make sure it completes? Thanks for any insights.
[Update] Perhaps what is going on is that the file system can handle only one read task at a time, and by starting all these async reads each is interrupting the previous one? But there must be a way to progressively read a folder full of files.
[Update] I think I have it working, by adopting the principle of concentric loops: the idea is not to proceed to the next load until the first one has completed. I think (someone can correct me if I'm wrong) that the file system cannot do simultaneous reads. If there is an accepted and secure example of how to do this I would still love to hear about it, so I'm not answering my own question.
#include <wrl.h>
#include <robuffer.h>

uint8_t* GetBufferData(winrt::Windows::Storage::Streams::IBuffer& buffer)
{
    ::IUnknown* unknown = winrt::get_unknown(buffer);
    ::Microsoft::WRL::ComPtr<::Windows::Storage::Streams::IBufferByteAccess> bufferByteAccess;
    HRESULT hr = unknown->QueryInterface(__uuidof(::Windows::Storage::Streams::IBufferByteAccess), &bufferByteAccess);
    if (FAILED(hr))
        return nullptr;
    byte* bytes = nullptr;
    bufferByteAccess->Buffer(&bytes);
    return bytes;
}
https://learn.microsoft.com/en-us/cpp/cppcx/obtaining-pointers-to-data-buffers-c-cx?view=vs-2017
https://learn.microsoft.com/en-us/windows/uwp/xbox-live/storage-platform/connected-storage/connected-storage-using-buffers

After a process calls syscall wait(), who will wake it up?

I have a general idea that a process can be in ready_queue where CPU selects candidate to run next. And there are these other queues on which a process waits for (broadly speaking) events. I know from OS courses long time ago that there are wait queues for IO and interrupts. My questions are:
There are many events a process can wait on. Is there a wait queue corresponding to each such event?
Are these wait queues created/destroyed dynamically? If so, which kernel module is responsible for managing these queues? The scheduler? Are there any predefined queues that will always exist?
To eventually get a waiting process off a wait queue, does the kernel have a way of mapping from each actual event (either hardware or software) to the wait queue, and then remove ALL processes on that queue? If so, what mechanisms does a kernel employ?
To give an example:
....
pid = fork();
if (pid == 0) { // child process
    // Do something for a second;
}
else { // parent process
    wait(NULL);
    printf("Child completed.");
}
....
wait(NULL) is a blocking system call. I want to know the rest of the journey the parent process goes through. My take of the story line is as follows, PLEASE correct me if I miss crucial steps or if I am completely wrong:
Normal system call setup through libc runtime. Now parent process is in kernel mode, ready to execute whatever is in wait() syscall.
wait(NULL) creates a wait queue where the kernel can later find this queue.
wait(NULL) puts the parent process onto this queue and creates an entry in some map that says "if I (the kernel) ever receive a software interrupt, signal, or whatever that indicates that the child process is finished, the scheduler should come look at this wait queue".
The child process finishes and the kernel somehow notices this fact. The kernel context-switches to the scheduler, which looks up in the map to find the wait queue the parent process is on.
Scheduler moves the parent process to ready queue, does its magic and sometime later the parent process is finally selected to run.
Parent process is still in kernel mode, inside wait(NULL) syscall. Now the main job of rest of the syscall is to exit kernel mode and eventually return the parent process to user land.
The process continues its journey on the next instruction, and may later be waiting on other wait queues until it finishes.
PS: I am hoping to know the inner workings of the OS kernel, what stages a process goes through in the kernel and how the kernel interact and manipulate these processes. I do know the semantics and the contract of the wait() Syscall APIs and that is not what I want to know from this question.
Let's explore the kernel sources. First of all, it seems all the
various wait routines (wait, waitid, waitpid, wait3, wait4) end up in the
same system call, wait4. These days you can find system calls in the
kernel by looking for the macros SYSCALL_DEFINE1 and so, where the number
is the number of parameters, which for wait4 is coincidentally 4. Using the
google-based freetext search in the Free Electrons Linux Cross
Reference we eventually find the definition:
1674 SYSCALL_DEFINE4(wait4, pid_t, upid, int __user *, stat_addr,
1675 int, options, struct rusage __user *, ru)
Here the macro seems to split each parameter into its type and name. This
wait4 routine does some parameter checking, copies them into a wait_opts
structure, and calls do_wait(), which is a few lines up in the same file:
1677 struct wait_opts wo;
1705 ret = do_wait(&wo);
1551 static long do_wait(struct wait_opts *wo)
(I'm missing out lines in these excerpts as you can tell by the
non-consecutive line numbers).
do_wait() sets another field of the structure to the name of a function,
child_wait_callback() which is a few lines up in the same file. Another
field is set to current. This is a major "global" that points to
information held about the current task:
1558 init_waitqueue_func_entry(&wo->child_wait, child_wait_callback);
1559 wo->child_wait.private = current;
The structure is then added to a queue specifically designed for a process
to wait for SIGCHLD signals, current->signal->wait_chldexit:
1560 add_wait_queue(&current->signal->wait_chldexit, &wo->child_wait);
Let's look at current. It is quite hard to find its definition as it
varies per architecture, and following it to find the final structure is a
bit of a rabbit warren. Eg current.h
6 #define get_current() (current_thread_info()->task)
7 #define current get_current()
then thread_info.h
163 static inline struct thread_info *current_thread_info(void)
165 return (struct thread_info *)(current_top_of_stack() - THREAD_SIZE);
55 struct thread_info {
56 struct task_struct *task; /* main task structure */
So current points to a task_struct, which we find in sched.h
1460 struct task_struct {
1461 volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */
1659 /* signal handlers */
1660 struct signal_struct *signal;
So we have found current->signal out of current->signal->wait_chldexit,
and the struct signal_struct is in the same file:
670 struct signal_struct {
677 wait_queue_head_t wait_chldexit; /* for wait4() */
So the add_wait_queue() call we had got to above refers to this
wait_chldexit structure of type wait_queue_head_t.
A wait queue is simply an initially empty, doubly-linked list of structures that contain a struct list_head (types.h):
184 struct list_head {
185 struct list_head *next, *prev;
186 };
The call add_wait_queue() (wait.c) temporarily locks the structure and,
via an inline function (wait.h), calls list_add(), which you can find in
list.h. This sets the next and prev pointers appropriately to add the new
item to the list.
An empty list has the two pointers pointing at the list_head structure.
After adding the new entry to the list, the wait4() system call sets a
flag that will remove the process from the runnable queue on the next
reschedule and calls do_wait_thread():
1573 set_current_state(TASK_INTERRUPTIBLE);
1577 retval = do_wait_thread(wo, tsk);
This routine calls wait_consider_task() for each child of the process:
1501 static int do_wait_thread(struct wait_opts *wo, struct task_struct *tsk)
1505 list_for_each_entry(p, &tsk->children, sibling) {
1506 int ret = wait_consider_task(wo, 0, p);
which goes very deep but in fact is just trying to see if any child already
satisfies the syscall, and we can return with the data immediately. The
interesting case for you is when nothing is found, but there are still running
children. We end up calling schedule(), which is when the process gives
up the cpu and our system call "hangs" for a future event.
1594 if (!signal_pending(current)) {
1595 schedule();
1596 goto repeat;
1597 }
When the process is woken up, it will continue with the code after
schedule() and again go through all the children to see if the wait
condition is satisfied, and probably return to the caller.
What wakes up the process to do that? A child dies and generates a SIGCHLD
signal.
In signal.c
do_notify_parent() is called by a process as it dies:
1566 * Let a parent know about the death of a child.
1572 bool do_notify_parent(struct task_struct *tsk, int sig)
1656 __wake_up_parent(tsk, tsk->parent);
__wake_up_parent() calls __wake_up_sync_key() and uses exactly the
wait_chldexit wait queue we set up previously.
exit.c
1545 void __wake_up_parent(struct task_struct *p, struct task_struct *parent)
1547 __wake_up_sync_key(&parent->signal->wait_chldexit,
1548 TASK_INTERRUPTIBLE, 1, p);
I think we should stop there, as wait() is clearly one of the more
complex examples of a system call and the use of wait queues. You can find
a simpler presentation of the mechanism in this 3 page Linux Journal
article from 2005. Many things
have changed, but the principle is explained. You might also buy the books
"Linux Device Drivers" and "Linux Kernel Development", or check out the
earlier editions of these that can be found online.
For the "Anatomy Of A System Call" on the way from user space to the kernel
you might read these lwn articles.
Wait queues are heavily used throughout the kernel whenever a task
needs to wait for some condition. A grep through the kernel sources finds
over 1200 calls of init_waitqueue_head() which is how you initialise a
waitqueue you have dynamically created by simply kmalloc()-ing the space
to hold the structure.
A grep for the DECLARE_WAIT_QUEUE_HEAD() macro finds over 150 uses of
this declaration of a static waitqueue structure. There is no intrinsic
difference between these. A driver, for example, can choose either method
to create a wait queue, often depending on whether it can manage
many similar devices, each with their own queue, or is only expecting one device.
No central code is responsible for these queues, though there is common
code to simplify their use. A driver, for example, might create an empty
wait queue when it is installed and initialised. When you use it to read data from some
hardware, it might start the read operation by writing directly into the
registers of the hardware, then queue an entry (for "this" task, i.e. current) on its wait queue to give up
the cpu until the hardware has the data ready.
The hardware would then interrupt the cpu, and the kernel would call the
driver's interrupt handler (registered at initialisation). The handler code
would simply call wake_up() on the wait queue, for the kernel to
put all tasks on the wait queue back in the run queue.
When the task gets the cpu again, it continues where it left off (in
schedule()) and checks that the hardware has completed the operation, and
can then return the data to the user.
So the kernel is not responsible for the driver's wait queue, as it only
looks at it when the driver calls it to do so. There is no mapping from the
hardware interrupt to the wait queue, for example.
If there are several tasks on the same wait queue, there are variants of
the wake_up() call that can be used to wake up only 1 task, or all of
them, or only those that are in an interruptable wait (i.e. are designed to
be able to cancel the operation and return to the user in case of a
signal), and so on.
In order to wait for a child process to terminate, a parent process will just execute a wait() system call. This call will suspend the parent process until any of its child processes terminates, at which time the wait() call returns and the parent process can continue.
The prototype for the wait() call is:
#include <sys/types.h>
#include <sys/wait.h>
pid_t wait(int *status);
The return value from wait is the PID of the child process which terminated. The parameter to wait() is a pointer to a location which will receive the child's exit status value when it terminates.
When a process terminates it executes an exit() system call, either directly in its own code, or indirectly via library code. The prototype for the exit() call is:
#include <stdlib.h>
void exit(int status);
The exit() call has no return value as the process that calls it terminates and so couldn't receive a value anyway. Notice, however, that exit() does take a parameter value - status. As well as causing a waiting parent process to resume execution, exit() also returns the status parameter value to the parent process via the location pointed to by the wait() parameter.
In fact, wait() can return several different pieces of information via the value to which the status parameter points. Consequently, a macro called WEXITSTATUS() is provided (accessed via <sys/wait.h>) which can extract and return the child's exit status. The following code fragment shows its use:
#include <sys/wait.h>
int statval, exstat;
pid_t pid;
pid = wait(&statval);
exstat = WEXITSTATUS(statval);
In fact, the version of wait() that we have just seen is only the simplest version available under Linux. The new POSIX version is called waitpid. The prototype for waitpid() is:
#include <sys/types.h>
#include <sys/wait.h>
pid_t waitpid(pid_t pid, int *status, int options);
where pid specifies what to wait for, status is the same as the simple wait() parameter and options allows you to specify that a call to waitpid() should not suspend the parent process if no child process is ready to report its exit status.
The various possibilities for the pid parameter are:
< -1 wait for a child whose PGID is -pid
-1 same behavior as standard wait()
0 wait for child whose PGID = PGID of calling process
> 0 wait for a child whose PID = pid
The standard wait() call is now redundant as the following waitpid() call is exactly equivalent:
#include <sys/wait.h>
int statval;
pid_t pid;
pid = waitpid(-1, &statval, 0);
It is possible for a child process which only executes for a very short time to terminate before its parent process has had the chance to wait() for it. In these circumstances the child process will enter a state, known as a zombie state, in which all its resources have been released back to the system except for its process data structure, which holds its exit status. When the parent eventually wait()s for the child, the exit status is delivered immediately and then the process data structure can also be released back to the system.

Watching for file size changes

I need to watch a folder for files- files will drop into folders, and it may take several seconds or a few minutes for the file copy to be complete. I've read multiple topics on SO (Checking File sizes for changes, Detect file in use by other process). Neither of these give a great answer.
Polling is "bad", but how can I know if a file stops increasing in size? Specifically, is there a notification for "file size is constant" or "file is complete"? Can the OS notify of non-activity (IOW, how do you prove a negative?). It would seem to me that logically, one MUST poll a file to see if it's not changing. I've also checked SCEvents and UKKQueue, but again both only notify of a change. UKKQueue has a "file size increased" method, but no "file size has not increased method".
Is there really any way to detect file copy completion without polling or using sleep()?
This is the code I used to monitor a file locally. I am not sure if this would work for you.
int fileHander = open("/location/file", O_RDONLY);
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
unsigned long mask = DISPATCH_VNODE_DELETE | DISPATCH_VNODE_WRITE | DISPATCH_VNODE_EXTEND |
                     DISPATCH_VNODE_ATTRIB | DISPATCH_VNODE_LINK | DISPATCH_VNODE_RENAME |
                     DISPATCH_VNODE_REVOKE;
__block dispatch_source_t source;

void (^changeHandler)(void) = ^{
    unsigned long l = dispatch_source_get_data(source);
    if (l & DISPATCH_VNODE_DELETE) {
        printf("file deleted");
        dispatch_source_cancel(source);
    }
    else {
        printf("file data changed");
    }
};

void (^cancelHandler)(void) = ^{
    int fileHander = dispatch_source_get_handle(source);
    close(fileHander);
};

source = dispatch_source_create(DISPATCH_SOURCE_TYPE_VNODE, fileHander, mask, queue);
dispatch_source_set_event_handler(source, changeHandler);
dispatch_source_set_cancel_handler(source, cancelHandler);
dispatch_resume(source);
If you have control over the files being delivered, copy them into a temporary directory on the same volume, and when the copy is done, then move the file. The move simply relinks the file in the file system. Then, kqueue can notify you when the file is present. It's presence means the whole file is there.
BTW, when you use the "atomically" version of the cocoa file manager API, this is pretty much what it does behind the scenes.
If you don't have control, and you just want to monitor files, then use kqueue to notify you when a file shows up. As the file grows, kqueue will notify you that it has been extended. However, it does not know whether some other app is done extending the file, so you still need some sort of timer to check for changes.
I would kick off an interval timer at the same time I register for kqueue NOTE_EXTEND events. I would keep track of the last time I saw a NOTE_EXTEND event, and if I had not seen one since the last timer fired, I would assume the file has stopped being extended.
Now, you have to determine what timer value to use, and whether you want to back off and keep watching for a while. But unless the application copying the file does so via a "move", or sends a notification that the file has been fully copied, you are going to need some type of timer with some arbitrary value, at which point you assume it's done.
Obviously, if you can fit into the first option, things are much better as you have a deterministic way of knowing that the file has stopped growing.