NETLINK input function in kernel - netlink

When we invoke the sendmsg API call from a user process, the input function is invoked and the message is delivered to the kernel. OK, but when we call the recvmsg API call, is the input function invoked again? I saw this in an example that I cannot comment on because I don't have enough reputation. The title of that post is: "How to use netlink socket to communicate with a kernel module?" Could anyone look at that example and tell me how to distinguish between writing to the kernel socket and reading from it?

Why would the input function be invoked again? sendmsg() sends and recvmsg() receives. hello_nl_recv_msg() is only executed when the kernel module receives a message.
In that example, the userspace program sends message A to the kernel using the sendmsg() function.
Message A arrives at the kernel. The kernel calls hello_nl_recv_msg(). Message A is encapsulated in the argument, skb.
The kernel module chooses to send a response to the process whose process ID is the one that sent skb. It creates message B. The kernel module sends message B to userspace using the nlmsg_unicast() function.
Message B appears in userspace during the recvmsg() call (because the process ID of the userspace program is the one the kernel module addressed).
recvmsg() sleeps until a message from the kernel is received, so you don't have to worry about whether the kernel has already answered before you call that function.
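For reference, here is a minimal kernel-module sketch of that flow, modeled on the hello_nl_recv_msg() example; NETLINK_USER (31) and the reply payload are assumptions from that post, not a fixed kernel API:

#include <linux/module.h>
#include <linux/netlink.h>
#include <linux/string.h>
#include <net/sock.h>

#define NETLINK_USER 31

static struct sock *nl_sk;

/* The input function: runs once per message the kernel socket
 * receives (message A), never on the user's recvmsg(). */
static void hello_nl_recv_msg(struct sk_buff *skb)
{
    struct nlmsghdr *nlh = (struct nlmsghdr *)skb->data;
    int pid = nlh->nlmsg_pid;              /* sender's port ID */
    const char *reply = "hello from kernel";
    int len = strlen(reply) + 1;
    struct sk_buff *skb_out = nlmsg_new(len, GFP_KERNEL);

    if (!skb_out)
        return;

    nlh = nlmsg_put(skb_out, 0, 0, NLMSG_DONE, len, 0);
    memcpy(nlmsg_data(nlh), reply, len);

    /* Message B: unicast back to the same port ID; the user
     * process's recvmsg() wakes up when it arrives. */
    nlmsg_unicast(nl_sk, skb_out, pid);
}

static int __init hello_init(void)
{
    struct netlink_kernel_cfg cfg = { .input = hello_nl_recv_msg };

    nl_sk = netlink_kernel_create(&init_net, NETLINK_USER, &cfg);
    return nl_sk ? 0 : -ENOMEM;
}

static void __exit hello_exit(void)
{
    netlink_kernel_release(nl_sk);
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

So writing to the kernel happens via sendmsg() (which triggers the input function above), and reading happens via recvmsg() (which simply waits for whatever the module unicasts back).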


asio - the design reason of async_write_some may not transmit all of the data

From the user's point of view, the property of "may not transmit all of the data" is troublesome: it may cause the handler to be called more than once.
The free function async_write ensures the handler is called only once, but it requires that the caller issue calls in sequence, or the written data will be interleaved. For network application usage, this is worse than the handler being called more than once.
If the user wants the handler to be called only once and the data to be written correctly, the user needs to do something.
What I want to ask is: why doesn't asio just make socket::async_write_some transmit all the data?
What I want to ask is: why doesn't asio just make socket::async_write_some transmit all the data?
As opposed to async_write, socket::async_write_some is a lower-level method.
The OS network stack is designed with send buffers and receive buffers. These buffers are necessarily limited to some amount of memory. When you send a lot of data over a socket, the receiving side can be slower than the sender, and/or there can be network speed issues.
This is why socket send buffers are limited, and, as a result, why syscalls like write or writev must be able to notify the user program that the system cannot accept a chunk of data right now. With a socket in async mode it is even more critical: socket syscalls cannot work asynchronously without signaling the program to hold on.
So async_write_some, as a mid-level wrapper around writev, is required to support partial writes. On the other hand, async_write is a composed operation that can call async_write_some many times in order to send the buffers until the operation completes or fails. It calls the completion handler only once, not for each chunk of data passed to the network stack.
If the user wants the handler to be called only once and the data to be written correctly, the user needs to do something.
Nothing special: just use async_write, not socket::async_write_some.
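At the syscall level, the retry-until-done logic that async_write performs looks roughly like the following blocking C sketch (the helper name write_all is mine, not part of asio):

#include <errno.h>
#include <unistd.h>

/* Keep calling write() until every byte has been handed to the
 * kernel's send buffer; write() may accept fewer bytes than
 * requested, exactly like async_write_some. */
ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = write(fd, buf + sent, len - sent);
        if (n < 0) {
            if (errno == EINTR)
                continue;        /* interrupted: retry */
            return -1;           /* real error */
        }
        sent += (size_t)n;       /* partial write: advance and loop */
    }
    return (ssize_t)sent;
}

async_write does the same thing asynchronously, re-issuing async_write_some from each intermediate completion until all buffers are drained, and only then calling your handler.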

Using the Asynchronous Programming Model (APM) in a WCF operation, what's going on "under the hood"?

Given an operation such as this:
public void DoSomething()
{
    IWcfServiceAgentAsync agent = new WcfServiceAgentProxy();
    var request = new DoSomethingRequest();

    agent.BeginDoSomething(request,
        iar =>
        {
            var response = agent.EndDoSomething(iar);
            /*
             * Marshal back on to UI thread with results
             */
        }, null);
}
What is really going on under the hood between the moment the operation starts and the moment the callback is executed? Is there a socket being polled, waiting for completion? Is there an underlying OS thread that stays blocked until it returns?
What happens is BeginDoSomething ends up calling base.Channel.BeginGetTest(callback, asyncState); on the WCF proxy. What that proxy then does is go through each part of the "Binding Stack" you have set up for your WCF communication.
One of the main parts of the binding stack your request will pass through is the "Message Encoder". The message encoder packages your request up into something that can be represented as a byte[] (this process is called serialization).
Once through the message encoder, your request will be sent to the transport (be it HTTP, TCP, or something else). The transport takes the byte[] and sends it to your target endpoint, then tells the OS "when you receive a response directed to me, call this function" via the I/O completion ports system. (I will assume either TCP or HTTP binding for the rest of this answer.) (EDIT: note that I/O completion ports don't have to be used; it is up to the transport layer to decide what to do. It is just that most of the implementations built into the framework use them.)
In the time between your message being sent and the response being received, no threads are used and no polling happens. When the network card receives a response, it raises an interrupt telling the OS it has new information; that interrupt is processed, and eventually the OS sees that it is a few bytes of data intended for your application. The OS then tells your application to start up an IOCP thread-pool thread and passes it the few bytes of data that were received.
(See "There Is No Thread" by Stephen Cleary for more info about this process. It talks about the TAP pattern instead of APM as in your example, but the underlying layers are exactly the same.)
When the bytes from the other computer are received, those bytes go back up the stack in the opposite direction. The IOCP thread runs a function from the transport; it takes the bytes that were passed to it and hands them off to the message encoder. This can happen several times!
The message encoder receives the bytes from the transport and tries to build up a message; if not enough bytes have been received yet, it just queues them and waits for the next set of bytes to be passed in. Once it has enough bytes to deserialize the message, it creates a new "Response" object and sets it as the result of the IAsyncResult. Then something (I am not sure what; it may be the WCF call stack, or somewhere else in .NET) sees that your IAsyncResult has a callback delegate, starts up another IOCP thread, and that thread is the one your delegate runs on.
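To make the "no thread waits per request" point concrete, here is a minimal Win32 C sketch of the I/O completion port pattern the transport relies on; the worker function and the key/context handling are hypothetical illustrations, not WCF's actual internals:

#include <windows.h>
#include <stdio.h>

/* One worker thread services completions for ALL outstanding I/O
 * operations on handles associated with the port; no thread is
 * parked per request. */
DWORD WINAPI IocpWorker(LPVOID param)
{
    HANDLE iocp = (HANDLE)param;
    DWORD bytes;
    ULONG_PTR key;
    LPOVERLAPPED ov;

    for (;;) {
        /* Blocks only while there is nothing to do; wakes when the
         * kernel posts a completion for any associated handle. */
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
            break;
        printf("completion: %lu bytes, key=%p\n", bytes, (void *)key);
        /* ... map `ov` back to the request context and invoke its
         * callback, much as the WCF transport does for your delegate. */
    }
    return 0;
}

In .NET the equivalent worker threads belong to the IOCP side of the thread pool, which is why your callback arrives on a pool thread rather than the one that called BeginDoSomething.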

UDP empty buffer ReceiveAsync

I used SocketClient.cs from this thread, and something very similar from MSDN.
Can somebody tell me why the buffer is empty after packets are received?
I have a host application on Windows 8, and I send a packet with some kind of information from the phone. The host then replies to me with a new packet, but the 'Receive' method receives empty information; the buffer is empty. How can I fix that?
If you are not reacting to the Completed event of the SAEA object, no data has been received. If you are, then you received an empty message or your buffer size was 0. This is what the docs are telling you.
I had a look at the code in your link and found that it is using a ManualResetEvent with the SendToAsync method. I don't know why it is doing this but it may be one cause, depending on the timeout specified.
I guess not everyone reads through the docs for the SAEA object, but you have to think of it as a thread synchronization object: it is sent to a thread, does its work there, and signals when it has finished; that's it. Maybe this is the issue with the code in your linked post: the thread that should receive the signal from the SAEA object is busy until the Reset method is called. If so, no event from the SAEA object working in another thread gets through.
Also note that SendToAsync may return immediately with false if the result is available at the time of the call. You can examine the result right away. So you would safely call it like this:
if (!_socket.SendToAsync(myEventArgs))
    ProcessResult(myEventArgs);
So the basic idea is: If you use the SocketAsyncEventArgs, don't use threading. The Async socket methods try to make the threading transparent to the user, and you would just add a threading layer on top of this. This is likely to get you in trouble.

Cancel thread with read() operation on serial port

In my Cocoa project, I communicate with a device connected to a serial port. Now, I am waiting for the serial device to send a particular message of some bytes. For the read operation (and the reaction once the desired message has been received), I created a new thread. On user request, I want to be able to cancel the thread.
As Apple suggests in the docs, I added a flag to the thread dictionary, periodically check if the flag has been set and if so, call [NSThread exit]. This works fine.
Now, the thread may be stuck waiting for the serial device to finally send the 12 byte message. The read call looks like this:
numBytes = read(fileDescriptor, buffer, 12);
Once the thread starts reading from the device but no data comes in, I can set the flag to tell the thread to finish, but the thread is not going to read the flag until it finally receives at least 12 bytes of data and continues processing.
Is there a way to kill a thread that currently performs a read operation on a serial device?
Edit for clarification:
I do not insist in creating a separate thread for the I/O operations with the serial device. If there is a way to encapsulate the operations such that I am able to "kill" them if the user presses a cancel button, I am perfectly happy.
I am developing a Cocoa application for desktop Mac OS X, so no restrictions regarding mobile devices and their capabilities apply.
A workaround would be to make the read function return immediately if there are no bytes to read. How can I do this?
Use select or poll with a timeout to detect when the descriptor is ready for reading.
Set the timeout to (say) half a second and call it in a loop while checking to see if your thread should exit.
Asynchronous thread cancellation is almost always a bad idea. Try to stick with event-driven interfaces (and, if necessary, timeouts).
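A minimal sketch of that loop in C, assuming a shouldExit flag that stands in for your thread-dictionary check (all names here are illustrative):

#include <sys/select.h>
#include <unistd.h>

/* Wait up to half a second for data, then re-check the exit flag
 * before blocking again; accumulate bytes until the 12-byte message
 * is complete. */
static ssize_t read_with_cancel(int fd, char *buf, size_t want,
                                volatile int *shouldExit)
{
    size_t got = 0;
    while (got < want && !*shouldExit) {
        fd_set rfds;
        struct timeval tv = { 0, 500000 };   /* 0.5 s timeout */

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        int ready = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (ready < 0)
            return -1;            /* select() error */
        if (ready == 0)
            continue;             /* timed out: re-check the flag */

        ssize_t n = read(fd, buf + got, want - got);
        if (n <= 0)
            return -1;            /* EOF or read error */
        got += (size_t)n;
    }
    return (ssize_t)got;
}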
This is exactly what the pthread_cancel interface was designed for. You'll want to wrap the block with read in pthread_cleanup_push and pthread_cleanup_pop in order that you can safely clean up if the thread is cancelled, and also disable cancellation (with pthread_setcancelstate) in other code that runs in this thread that you don't want to be cancellable. This can be a pain if proper cleanup would involve multiple call frames; it essentially forces you to use pthread_cleanup_push at every call level and structure your thread code like C++ or Java with try/catch style exception handling.
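A minimal sketch of that structure, assuming read() is the only blocking call in the thread (the cleanup handler and the names are illustrative):

#include <pthread.h>
#include <unistd.h>

static void cleanup_serial(void *arg)
{
    /* Runs only if the thread is cancelled inside read():
     * free buffers, close the descriptor, etc. */
}

static void *serial_thread(void *arg)
{
    int fileDescriptor = *(int *)arg;
    char buffer[12];
    ssize_t numBytes;

    /* read() is a cancellation point, so pthread_cancel() can take
     * effect while the thread is blocked in it. */
    pthread_cleanup_push(cleanup_serial, buffer);
    numBytes = read(fileDescriptor, buffer, sizeof buffer);
    pthread_cleanup_pop(0);   /* 0: don't run the handler on the normal path */

    (void)numBytes;
    return NULL;
}

/* From the cancel button's action: pthread_cancel(readerThread); */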
An alternative approach would be to install a signal handler for an otherwise-unused signal (like SIGUSR1 or one of the realtime signals) without the SA_RESTART flag, so that it interrupts syscalls with EINTR. The signal handler itself can be a complete no-op; the only purpose of it is to interrupt things. Then you can use pthread_kill to interrupt the read (or any other syscall) in a particular thread. This has the advantage that you don't have to switch your code to using C++/Java-type idioms. You can handle the EINTR error by checking a flag (indicating whether the thread was requested to abort) and resume the read if the flag is not set, or return an error code that causes the caller to clean up and eventually pthread_exit.
If you do use interrupting signal handlers, make sure all your syscalls that can return EINTR are wrapped in loops that retry (or check the abort flag and optionally retry) on EINTR. Otherwise things can break badly.
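Sketching the signal-based approach; SIGUSR1 and all helper names are assumptions for illustration:

#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t abortRequested = 0;

/* The handler body is a deliberate no-op; its only job is to make
 * a blocked read() fail with EINTR (SA_RESTART is NOT set). */
static void wakeup_handler(int sig) { (void)sig; }

static void install_wakeup_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = wakeup_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                 /* deliberately no SA_RESTART */
    sigaction(SIGUSR1, &sa, NULL);
}

/* In the reading thread: retry on EINTR unless an abort was requested. */
static ssize_t interruptible_read(int fd, void *buf, size_t len)
{
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0 || errno != EINTR)
            return n;                /* data, EOF, or a real error */
        if (abortRequested)
            return -1;               /* interrupted on purpose: give up */
    }
}

/* From the cancelling thread:
 *     abortRequested = 1;
 *     pthread_kill(readerThread, SIGUSR1);
 */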

Confusing function behavior on VxWorks

We're trying to use VxWorks' UDP multicast.
Using the command line (the -> shell prompt), we call the initialization function with some parameters, and the multicast runs successfully.
When I try to run this method from code, the initialization function returns OK (no errors), but does not initialize the multicast UDP port.
Is there a catch?
One thing to be aware of is that the TCP/IP stack gets initialized after the rootTask completes.
The usrAppInit function runs in the context of the root task. If you are invoking network stack elements in usrAppInit, things might not work.
Make sure you invoke your networking code from a task that has been spawned with a lower priority than the network stack (which runs at priority 50).
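For illustration, a minimal sketch of such a spawn; the task name, priority 100, stack size, and myMulticastInit are assumptions (in VxWorks, a numerically larger priority value means a less urgent task):

#include <vxWorks.h>
#include <taskLib.h>

extern STATUS myMulticastInit(void);    /* your init function (assumed) */

void startMulticast(void)
{
    /* Priority 100 is numerically greater, and therefore lower
     * priority, than the network stack's 50, so the stack is fully
     * up before this task touches it. */
    taskSpawn("tMcastInit", 100, 0, 8192,
              (FUNCPTR)myMulticastInit,
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}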