How does CFReadStreamRead distinguish packets when reading? - objective-c

I feel I'm missing some fundamental info. The problem I'm facing is that I'm receiving one data packet that consists of two other packets merged together. The server API works correctly; this merging happens only when multiple writes and reads occur at the same time. So the question is: if a server performs several writes to a socket, who is responsible for determining where one packet ends and the next begins? Me or CFReadStreamRead?
Currently I'm using AsyncSocket, which is a wrapper around CFNetwork, but IMHO the problem is somehow related to CFReadStreamRead itself: sometimes it reads two packets as one instead of reading them separately. I was unable to find relevant examples or info about how the flow goes when multiple writes and reads occur at the same time. Do I need to do something explicit to grab a packet, or do I need to split the packets out of the incoming data stream manually? I thought CFReadStreamRead already did that, but maybe I'm missing something.

It's not clear what you mean here by "packets" and I suspect you're using the term very loosely. The place that packet (i.e. IP packet) boundaries are determined is at the network layer, which is well below any of the things you're talking about.
At the layer you're talking about, all writes are going to be joined together. Bytes are bytes. A single write does not in any way suggest a single read. Multiple writes may map to a single read or vice versa. It is up to the protocol to indicate where boundaries are between things the protocol thinks of as separate messages. In the situation you're describing, that means this is an issue between you and your server.
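To make that concrete, here is a minimal receiver-side framing sketch in C. It assumes a protocol where each message is prefixed with a 4-byte big-endian length; the fixed buffer and the handleMessage() callback are hypothetical, not part of CFNetwork:

    #include <stdint.h>
    #include <string.h>

    static uint8_t buf[4096];     /* accumulates whatever the stream hands us */
    static size_t  bufLen = 0;

    void handleMessage(const uint8_t *msg, size_t len);   /* hypothetical */

    /* Call this with each chunk CFReadStreamRead returns. */
    void onBytesRead(const uint8_t *data, size_t len)
    {
        memcpy(buf + bufLen, data, len);   /* real code must bound-check */
        bufLen += len;

        /* Extract as many complete messages as the buffer holds. */
        while (bufLen >= 4) {
            uint32_t msgLen = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
                            | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
            if (bufLen < 4 + msgLen)
                break;                     /* message still incomplete */
            handleMessage(buf + 4, msgLen);
            memmove(buf, buf + 4 + msgLen, bufLen - 4 - msgLen);
            bufLen -= 4 + msgLen;
        }
    }

Note that one call to onBytesRead may deliver zero, one, or several complete messages; that is exactly the byte-stream behaviour described above.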

Related

What happens on the receive side when we write with SPI?

When the SPI master writes to the slave, something is shifted into the receive buffer, right?
If yes, then is it normal for the "RXDATAAVAILABLE" flag to be set? That seems like nonsense: we send data, and when the data has been sent, we get notified that data has been received.
If all of my statements are correct, how do we know which data in the RXFIFO is the correct data?
Suppose we send a two-byte frame: the first byte is an address, and the second is a dummy byte used to read the value at that address of the slave. Then suppose we have a two-level Rx FIFO. Instead of just the value read from the slave, that FIFO holds two bytes: the first is who knows what, and the second is the value read from the slave.
So the question is: how do we manage to receive only what is necessary, without getting junk data during the write part of the frame?
SPI works like a simple 8-bit shift register. You shift out a bit on MOSI at each clock edge, and at the same time you shift in a new bit from MISO. Thus you send and receive at the same time. Hence the names: MOSI = Master Out Slave In, and MISO = Master In Slave Out.
SPI peripherals on microcontrollers are more intricate than that, though, and have separate data registers distinct from the actual hardware shift register, so that we can write data without worrying about the pending transmission. Some may even have multiple data buffers. But on the fundamental level, SPI always works with 8 bits.
When the microcontroller acting as SPI master writes something, there are usually two flags: one that says the data buffer is available again, and another that says the transmission is done.
When you are done sending, you are also done receiving. You'll get some sort of flag set. This is assuming that all devices are implementing SPI as intended, which is often not the case.
Note that some devices implement a system where you first send x bytes of data, and after that receive x bytes of data. This seems to be the scenario you describe. Sending and receiving is not done at the same time for that device, but instead in sequence. Meaning that during the first transmission, you'll clock in garbage, and then in order to receive data, you must clock out garbage. This is no fault of SPI, but how the manufacturer of the specific device has specified things.
Note that SPI is very poorly standardized and therefore all manner of weird crap exists on the market. The manner of sending/receiving data may vary, the clock polarity (flanks) may vary, where the device clocks the data may vary. Some devices might need delays between data bytes. Some devices might need some obscure handling of the Slave Select pin in order to work. It's all one big mess and the lack of international standardization is to blame.
An SPI master engine's received-data-available flag is set as a simple consequence of a word's worth of clock cycles having occurred - clock cycles generated by the master itself. It tells you nothing about the operation, or even the existence, of a peripheral on the bus.
When this flag is set, it is entirely up to you and your software to know if the contents of the received data register will have meaning or not.
If you have properly selected and interacted with an existing, operational peripheral in a read or transfer operation where it is documented to give a result, the contents will have meaning.
If you have performed a purely write operation on a peripheral for which no reply data is documented at the word position in question, the contents will be meaningless, effectively no different from reading some random legal memory location. Note that in most cases a write operation is simply a transfer where the received data is to be ignored - at the implementation level there is usually no other difference.
If you have failed to address any existing peripheral, the contents will similarly be meaningless.
As with any other memory or read operation, it is up to you to know if the contents of the register in a particular situation are meaningful or not.
Since you know that the first byte contains "who knows what" while the second has meaning, write your software to ignore the first and use the second.
(As an aside, many, though by no means all, SPI peripherals are documented to shift out whatever constitutes their primary status register during the address phase, since that makes for a quick way to poll it)
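To illustrate with the two-byte read described in the question, here is a minimal sketch in C (spi_transfer() and the slave-select steps are hypothetical HAL details, not any particular vendor's API):

    #include <stdint.h>

    uint8_t spi_transfer(uint8_t out);    /* hypothetical: exchange one byte */

    uint8_t spi_read_register(uint8_t addr)
    {
        /* assert slave select here, as your hardware requires */
        (void)spi_transfer(addr);           /* byte 1: clock out the address;
                                               the byte clocked in is junk - ignore it */
        uint8_t value = spi_transfer(0x00); /* byte 2: clock out a dummy;
                                               the byte clocked in is the reply */
        /* deassert slave select here */
        return value;
    }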

Microcontroller to microcontroller communication library (over UART/RS232)

I want to interface two microcontrollers over a UART, and I'm looking for a protocol to exchange data between them.
In practice, I want to exchange data periodically (i.e. sensor readings) and also data on events (GPIO state). I have around 100-200 bytes to exchange every 100 milliseconds.
Does anybody know a protocol or library to achieve this kind of task?
For now, I see protobuf and nano protobuf. Is there something else?
It would be nice if I could add a software layer over the UART and use a "virtual data stream", as if it were a TCP/IP connection to N ports.
Any ideas?
Thanks
I think the most straightforward way is to roll your own.
You'll find RS232 drivers in the manufacturer's chip support library.
RS232 is a stream-oriented transport, which means you will need to encode your messages into some framing structure when you send them and detect frame boundaries on the receiving side. A clever and easy-to-use mechanism for doing this is "Consistent Overhead Byte Stuffing" (COBS).
https://en.wikipedia.org/wiki/Consistent_Overhead_Byte_Stuffing
This simple algorithm replaces the zero bytes inside your messages with other values, so a zero byte can be used to mark the start and end of a frame. If a byte gets corrupted on the way, you can even resynchronize to the stream and keep going.
The code on Wikipedia should be easy enough even for the smallest microprocessors.
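For reference, a minimal encoder sketch in C, following the algorithm as described on that page (append a 0x00 after the encoded output to mark the end of the frame; the decoder is just as small):

    #include <stddef.h>
    #include <stdint.h>

    /* Encode `length` bytes from `input` into `output`.  The output buffer
       needs room for at most length + length/254 + 1 bytes. */
    size_t cobs_encode(const uint8_t *input, size_t length, uint8_t *output)
    {
        size_t  read = 0;       /* next input byte */
        size_t  write = 1;      /* next output byte (slot 0 is the first code) */
        size_t  code_pos = 0;   /* where the current block's code byte goes */
        uint8_t code = 1;       /* 1 + number of non-zero bytes in this block */

        while (read < length) {
            if (input[read] == 0) {
                output[code_pos] = code;    /* close this block at the zero */
                code = 1;
                code_pos = write++;
            } else {
                output[write++] = input[read];
                if (++code == 0xFF) {       /* block full: 254 non-zero bytes */
                    output[code_pos] = code;
                    code = 1;
                    code_pos = write++;
                }
            }
            read++;
        }
        output[code_pos] = code;            /* close the final block */
        return write;
    }

For example, encoding the bytes 41 42 00 43 ("AB\0C") yields 03 41 42 02 43, which contains no zeros.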
Afterwards you can define your message format. You can probably keep it very simple and directly send your data structures as-is.
Suggestion for a simple message format:
    Byte-ID   Meaning
    -------   ------------------------------
    0         Destination port number
    1         Message type (define your own)
    2 to n    Message data
If you want to send variable-length messages, you can either send a length byte or derive the length from the output of the Consistent Overhead Byte Stuffing framing.
By the way, UART/RS232 is nice and easy to work with, but you may also want to take a look at SPI. The SPI interface is more suitable for exchanging data between two microcontrollers. It is usually faster than RS232 and more robust, because it has a dedicated clock line.
How about this: eRPC https://community.nxp.com/docs/DOC-334083
The eRPC (Embedded Remote Procedure Call) is a Remote Procedure Call (RPC) system created by NXP. An RPC is a mechanism used to invoke a software routine on a remote system using a simple local function call. The remote system may be any CPU connected by an arbitrary communications channel: a server across a network, another CPU core in a multicore system, and so on. To the client, it is just like calling a function in a library built into the application. The only difference is any latency or unreliability introduced by the communications channel.
I have used it in a two-processor embedded system, a Cortex-A9 CPU with a Cortex-M4 MCU, which communicate with each other over SPI/GPIO.
eRPC can run over UART, SPI, rpmsg, and the network (TCP). Even when using serial or SPI as the transport tunnel, it can do bidirectional calls, and with a very minimal footprint.
Simple serial point-to-point communication protocol
http://www.zipplet.co.uk/index.php/content/openformats_mise
It depends on whether you need a master/slave implementation, noise protection, point-to-point or multi-point (and in that case collision detection), etc.
But, as our colleague said, I would go with the simplest solution that fits the problem, following the KISS principle http://en.wikipedia.org/wiki/KISS_principle
Just add some header information like an ID and a length, CRC checking if necessary, and be happy :)
Try Microcontroller Interconnect Network (MIN) 1.0:
https://github.com/min-protocol/min
It has framing using byte stuffing to keep the receiver in sync, a 16-bit Fletcher checksum, an identifier for use by the application, and a variable payload of up to 15 bytes.
There's embedded C code there, plus a Python implementation to make it easier to talk to a PC.
As the first answer says, the most straightforward approach is to roll your own. Define your header (the "format" above) as needed, perhaps including status information so each processor knows that the other is working properly. I have had success with a protocol that includes:
- A 2-byte ASCII prefix and suffix, such as "[" and "]", so that a protocol analyzer can show you message boundaries.
- The number of bytes.
- The command ID (parsed to indicate which command handler to use).
- The command arguments (I used three 32-bit words).
- A CRC or checksum to verify transfer integrity.
The parser then recognizes the "[" as the start of a message and, as long as the checksum matches, dispatches the body to the command handler for that command ID, along with the associated arguments.
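A minimal sketch of such a parser in C, assuming one-character "[" and "]" framing with a one-byte XOR checksum at the end of the body (dispatch() and the buffer size are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_BODY 64

    void dispatch(const uint8_t *body, size_t len);   /* hypothetical */

    void parser_feed(uint8_t byte)
    {
        static uint8_t body[MAX_BODY];
        static size_t  len = 0;
        static int     in_frame = 0;

        if (byte == '[') {                /* start of message: reset state */
            in_frame = 1;
            len = 0;
        } else if (byte == ']' && in_frame) {
            if (len >= 1) {               /* last body byte is the checksum */
                uint8_t sum = 0;
                for (size_t i = 0; i + 1 < len; i++)
                    sum ^= body[i];
                if (sum == body[len - 1])
                    dispatch(body, len - 1);
            }
            in_frame = 0;
        } else if (in_frame && len < MAX_BODY) {
            body[len++] = byte;
        }
    }

Feeding it one received byte at a time means a corrupted or truncated frame is simply discarded at the next "[".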

WasapiLoopbackCapture to WaveOut

I'm using WasapiLoopbackCapture to capture the sound coming from my speakers, and then using OnDataAvailable to send it to another device. I'm attempting to play the sent data using the WaveOut class and a BufferedWaveProvider, adding a sample every time data arrives from my client via OnDataAvailable. I'm having problems with the sound. The closest to working I've managed to get is:
Not syncing the wave format of the client and the server, just sending data and adding it to the buffer. The problem is that this stutters very badly, even though I checked the buffered size and it held 51 seconds. I even have to increase the buffer size, which eventually overflows anyway.
I tried syncing the wave format, and then I just get clicks but have no problem with buffer size. I also tried making sure that at least a second was stored in the buffer, but that had zero effect.
If anyone could point me in the right direction that would be great.
Uncompressed audio takes up a lot of space on a network. On my machine the WasapiLoopbackCapture object produces 32-bit (IeeeFloat) stereo samples at 44100 samples per second, for around 2.7Mbit/sec total raw bandwidth. Once you factor in TCP packet overheads and so on, that's quite a lot of data you're transferring.
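To sanity-check that figure:

    44100 samples/s × 2 channels × 4 bytes/sample = 352,800 bytes/s
    352,800 bytes/s × 8 = 2,822,400 bits/s ≈ 2.7 Mbit/s (counting 1 Mbit as 2^20 bits,
                                                          or about 2.8 Mbit/s in decimal units)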
The first thing I would suggest is that you plug some profiling code into each step of the process to get an idea of where the delays are happening. How fast is data arriving from the capture device? How big are your packets? How long does it take to service each call to your OnDataAvailable event handler? How much data are you sending per second across the network? How fast is the data arriving at the client? Figure out where the delays are and you'll have a much better idea of what the bottleneck actually is.
Try building a simulated server that reads data from a wave file in various WaveFormats (channels, bits per sample and sample rate) and simulates sending that data across the network to the client. You might find that the problem goes away at lower bandwidth. And if bandwidth is the issue, compression might be the solution.
If you're using a single-threaded model, and servicing each OnDataAvailable event takes longer than the interval between calls (i.e. one over the expected number of calls to OnDataAvailable per second), then there's going to be a data-loss issue. Multiple threads can help with this - one to get the data from the audio system, another to process and send it. But you can end up in the same position: losing data because you're not dealing with it quickly enough. When that happens, it's handy to know about it, because it indicates a problem in the program. Find out when and where it happens - overflow in the input, processing, or output buffers all have different potential causes and need different attention.

Grand Unified Theory of logging

Is there a Grand Unified Theory of logging? Shall we develop one? A question, just to show this is not a discussion :) - how can I improve on the following? (Note that I live mainly in the embedded world, but non-embedded suggestions are also welcome.)
How do you log, when do you log, what do you log, what do you do with log files?
How do you log - I generally have macros, #ifdef TESTING, sort of thing (a sketch follows below). They write to RAM, and a low-priority process writes them out when the system is idle (using UDP, since I do embedded systems).
When do you log - same as voting: early and often. I log at varying levels at every (in)significant program event: events received, transactions succeeding/failing, data updated, etc.
What do you log - Fatal/Error/Warning/Info/Debug/Trace is covered in When to use the different log levels?
What do you do with log files - 1) keep them (in CVS), both pass and fail; 2) capture everything and filter later, in case I can't repeat a problem. I have tools to filter the log by "level" (Fatal/Error/etc.), process, file, and so on, and to draw message sequence charts, dump data structures, and draw histograms of memory usage - what am I missing?
Hmmm, binary or ASCII log file format? ASCII is bulkier, but binary requires more processing. I have done both; currently I use ASCII.
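Regarding the "how do you log" item above, a minimal sketch of that macro style in C (TESTING and log_to_ram() are illustrative names, not from any particular codebase; ##__VA_ARGS__ is a GCC-style extension):

    #ifdef TESTING
      #define LOG(level, fmt, ...) \
          log_to_ram(level, __FILE__, __LINE__, fmt, ##__VA_ARGS__)
    #else
      #define LOG(level, fmt, ...) ((void)0)   /* compiled out in release */
    #endif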
Question - did I miss anything, and how can I improve on this?
You could "instrument" your code in many different ways, everything from start-up/shut-down events to individual machine instruction execution (using a processor emulator). Of all the possibilities, what's worth doing? Don't just do it for the sake of completeness; have a specific goal in mind. A business case if you like, with a benefit you expect to receive. E.g.:
Insight into CPU task execution times/patterns to enable optimisation (if you need to improve performance).
Insight into other systems to resolve system integration issues (e.g. what messages is your VoIP box sending and receiving when it connects to a particular peer?)
Insight into the nature of errors (for field diagnostics)
Aid in development
Aid in validation testing
I imagine that there's no grand unified theory of logging, because what you do would depend on many details:
- Quantity of data
- Type of data
  - Events
  - Streamed audio/video
- Available storage
  - Storage speed
  - Storage capacity
- Available channels to extract data
  - Bandwidth
  - Cost
  - Availability
    - Internet connected 24×7
    - Site visit required
    - Need to unlock a rusty gate, climb a ladder onto a roof, to plug in a cable, after filling out OHS documentation
    - Need to wait until the Antarctic winter is over and the ice sheets thaw
- Random access vs linear access (e.g. if you compress it, do you need to read from the start to decompress and access some random point?)
- Need to survive error conditions
  - Watchdog reboots
  - Possible data corruption
    - Due to failing power supply
    - Due to unreliable storage media
  - Need to survive a plane crash
As for ASCII vs binary, I usually prefer to keep the logging simple, and put any nice presentation in a PC application that decodes the data. It's usually easier to create a user-friendly presentation in PC software (written in e.g. Python) rather than in the embedded system itself.
did I miss anything, and how can I improve on this?
Asynchronous logging.
Using multiple log files for the same process for different logging abstractions: e.g. the process's activities are logged in a normal log file, and the process's stats (periodic statistics that you might be interested in) are logged in a separate stats log file.
Hmmm, binary or ascii log file format? Ascii is bulkier, but binary requires more processing. I have done both, currently I use ascii
ASCII is good. More often than not, logs are meant to be used for debugging purposes. A human-readable form eases and speeds this up.
However, if your logs are used mostly to record information for later analysis and report generation (e.g. stats or latencies), a binary format would be preferable. You can go one step further and use a custom format along with a DB service that does index-based sorting, where the index can be a tuple of time and event type.
--
One thing which may be helpful is to have a "maybeLogger" object which accepts log records for an operation that may or may not succeed, and then either ditches those records if the operation succeeds or fails in an uninteresting way, or logs them if it does something interesting. This is relatively easy to do in something like .NET. In an embedded system, it can only be done really easily if the amount of stuff to be logged is small enough to fit in free RAM, but one could probably use a garbage-collection-based approach to hold stuff in flash (have one 'stream' of data in flash for new log entries, and another for ones that are confirmed to be interesting; periodically move data which is known to be good from the first stream to the second).
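A minimal sketch of that idea in C, assuming the pending records fit in a small RAM buffer (the sizes and flush_to_real_log() are illustrative):

    #include <stddef.h>
    #include <string.h>

    #define PENDING_MAX 1024

    void flush_to_real_log(const char *data, size_t len);   /* hypothetical */

    static char   pending[PENDING_MAX];
    static size_t pendingLen = 0;

    /* Buffer a record until we know whether the operation was interesting. */
    void maybe_log(const char *record, size_t len)
    {
        if (pendingLen + len <= PENDING_MAX) {   /* silently drops on overflow */
            memcpy(pending + pendingLen, record, len);
            pendingLen += len;
        }
    }

    /* Commit or ditch the buffered records when the operation finishes. */
    void operation_finished(int interesting)
    {
        if (interesting)
            flush_to_real_log(pending, pendingLen);
        pendingLen = 0;
    }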
Here's my $0.02.
I only log when I'm having a problem and need to track down the source. Usually this has to do with a customer's environment, so I can't just attach the debugger. My solution is to enable the Telnet port and use it to print out statements showing where the program is and the values of variables.
I do ASCII only because it's over telnet.
Another aspect of telnet is that it is pretty simple. It's a TCP port with text being thrown out. Very little processing other than the normal TCP headaches.
The log files are dumped as soon as I get them, because I have not tried to capture and save a telnet session. I guess I could with Wireshark, but I don't need a history of that session. I just need to find the problem and verify the fix.

Protocols used to talk between an embedded CPU and a PC

I am building a small device with its own CPU (AVR Mega8) that is supposed to connect to a PC. Assuming that the physical connection and passing of bytes has been accomplished, what would be the best protocol to use on top of those bytes? The computer needs to be able to set certain voltages on the device, and read back certain other voltages.
At the moment, I am thinking of a completely host-driven synchronous protocol: the computer sends requests, the embedded CPU answers. Any other ideas?
Modbus might be what you are looking for. It was designed for exactly the type of problem you have. There is lots of code and tooling out there, and adherence to a standard could mean easy reuse later. It also supports human-readable ASCII, so it is still easy to understand and test.
See FreeModBus for Windows and embedded source code.
There's a lot to be said for client-server architecture and synchronous protocols. Simplicity and robustness, to start. If speed isn't an issue, you might consider a compact, human-readable protocol to help with debugging. I'm thinking along the lines of modem AT commands: a "wakeup" sequence followed by a set/get command, followed by a terminator.
Host --> [V02?] // Request voltage #2
AVR --> [V02=2.34] // Reply with voltage #2
Host --> [V06=3.12] // Set voltage #6
AVR --> [V06=3.15] // Reply with voltage #6
Each side might time out if it doesn't see the closing bracket, and they'd re-synchronize on the next open bracket, which cannot appear within the message itself.
Depending on speed and reliability requirements, you might encode the commands into one or two bytes and add a checksum.
It's always a good idea to reply with the actual voltage, rather than simply echoing the command, as it saves a subsequent read operation.
It's also helpful to define error messages, in case you need to debug.
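For illustration, a minimal host-side parse of one such reply in C, assuming exactly the bracketed format above (the frame has already been isolated; timeout and resync logic are not shown):

    #include <stdio.h>

    /* Returns 1 and fills channel/volts on success, 0 otherwise.
       e.g. parse_reply("[V02=2.34]", &ch, &v) gives ch == 2, v == 2.34 */
    int parse_reply(const char *frame, int *channel, double *volts)
    {
        return sscanf(frame, "[V%2d=%lf]", channel, volts) == 2;
    }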
My vote is for the human-readable option.
But if you go binary, try to put a header byte at the beginning to mark the beginning of a packet. I've always had bad luck with serial protocols getting out of sync. The header byte allows the embedded system to re-sync with the PC. Also, add a checksum at the end.
I've done stuff like this with a simple binary format:

    #include <stdint.h>

    struct PacketHdr
    {
        uint8_t syncByte1;
        uint8_t syncByte2;
        uint8_t packetType;
        uint8_t bytesToFollow;    /* -or- totalPacketSize */
    };

    struct VoltageSet
    {
        struct PacketHdr hdr;
        int16_t  channelId;
        int16_t  voltageLevel;
        uint16_t crc;
    };

    struct VoltageResponse
    {
        struct PacketHdr hdr;
        int16_t  data[N];         /* number of channels is fixed */
        uint16_t crc;
    };
The sync bytes are less critical in a synchronous protocol than in an asynchronous one, but they still help, especially when the embedded system is first powering up, and you don't know if the first byte it gets is the middle of a message or not.
The type should be an enum that tells how to interpret the packet. Size could be inferred from the type, but if you send it explicitly, the receiver can handle unknown types without choking. You can use 'total packet size' or 'bytes to follow'; the latter can make the receiver code a little cleaner.
The CRC at the end adds more assurance that you have valid data. Sometimes I've seen the CRC in the header, which makes declaring structures easier, but putting it at the end lets you avoid an extra pass over the data when sending the message.
The sender and receiver should both have timeouts that start after the first byte of a packet is received, in case a byte is dropped. The PC side also needs a timeout to handle the case where the embedded system is not connected and there is no response at all.
If you are sure that both platforms use IEEE-754 floats (PCs do) and have the same endianness, then you can use floats as the data type. Otherwise it's safer to use integers, either raw A/D bits or a preset scale (i.e. 1 bit = 0.001 V gives a +/-32.767 V range).
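A quick sketch of that integer scaling in C, using the 1 bit = 0.001 V convention mentioned above (truncation shown; add rounding if you need it):

    #include <stdint.h>

    /* 1 LSB = 1 mV, so an int16_t covers -32.768 V .. +32.767 V */
    int16_t volts_to_raw(double volts) { return (int16_t)(volts * 1000.0); }
    double  raw_to_volts(int16_t raw)  { return raw * 0.001; }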
Adam Liss makes a lot of great points. Simplicity and robustness should be the focus. Human readable ASCII transfers help a LOT while debugging. Great suggestions.
They may be overkill for your needs, but HDLC and/or PPP add in the concept of a data link layer, and all the benefits (and costs) that come with a data link layer. Link management, framing, checksums, sequence numbers, re-transmissions, etc... all help ensure robust communications, but add complexity, processing and code size, and may not be necessary for your particular application.
A USB bus would answer all your requirements. It could be a very simple USB device with only a control pipe for sending requests to your device, or you could add an interrupt pipe that would allow you to notify the host about changes in your device.
There are a number of simple USB controllers that can be used, for example from Cypress or Microchip.
The protocol on top of the transfer really comes down to your requirements. From your description, it seems that a simple synchronous protocol is definitely enough. What makes you waver and look for an additional approach? Share your doubts and we will try to help :).
If I wasn't expecting to need to do efficient binary transfers, I'd go for the terminal-style interface already suggested.
If I do want a binary packet format, I tend to use something loosely based on the PPP byte-async HDLC format, which is extremely simple and easy to send and receive; basically:
Packets start and end with 0x7e
You escape a char by prefixing it with 0x7d and toggling bit 5 (i.e. xor with 0x20)
So 0x7e becomes 0x7d 0x5e
and 0x7d becomes 0x7d 0x5d
Every time you see a 0x7e, if you've got any data stored, you can process it.
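A minimal sketch of the receive side of that scheme in C (process_frame() and the buffer size are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    #define FRAME_MAX 256

    void process_frame(const uint8_t *data, size_t len);   /* hypothetical */

    void hdlc_feed(uint8_t byte)
    {
        static uint8_t frame[FRAME_MAX];
        static size_t  len = 0;
        static int     escaped = 0;

        if (byte == 0x7e) {               /* flag byte: frame boundary */
            if (len > 0)
                process_frame(frame, len);
            len = 0;
            escaped = 0;
        } else if (byte == 0x7d) {        /* escape: modify the next byte */
            escaped = 1;
        } else if (len < FRAME_MAX) {
            frame[len++] = escaped ? (byte ^ 0x20) : byte;
            escaped = 0;
        }
    }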
I usually do host-driven synchronous stuff unless I have a very good reason to do otherwise. It's a technique which extends from simple point-to-point RS232 to multidrop RS422/485 without hassle - often a bonus.
As you may have already gathered from all the responses not directing you to a specific protocol, a roll-your-own approach is probably your best choice.
So, this got me thinking and well, here are a few of my thoughts --
Given that this chip has 6 ADC channels, most likely you are using RS-232 serial comms (a guess from your question), and of course code space is limited, so defining a simple command structure will help, as Adam points out. You may wish to keep input processing on the chip to a minimum, so binary sounds attractive, but the trade-off is in ease of development AND servicing (you may have to troubleshoot a dead input six months from now); HyperTerminal is a powerful debug tool. So that got me thinking about how to implement a simple command structure with good reliability.
A few general considerations --
Keep commands the same size -- it makes decoding easier.
Framing the commands, with an optional checksum, as Adam points out, can easily be wrapped around your commands. (With small commands, a simple XOR/ADD checksum is quick and painless.)
I would recommend a start-up announcement to the host with the firmware version at reset - e.g., "HELLO; Firmware Version 1.00z" -- it tells the host that the target just started and what's running.
If you are primarily monitoring, you may wish to consider a "free run" mode where the target simply cycles through the analog and digital readings -- of course, this doesn't have to be continuous; it can be spaced at 1, 5, or 10 seconds, or happen only on command. Your micro is always listening, so sending an updated value is an independent task.
Terminating each output line with a CR (or another character) makes synchronization at the host straightforward.
For example, your micro could simply output the strings:
V0=3.20
V1=3.21
V2= ...
D1=0
D2=1
D3=...
and then start over --
Also, the commands could be really simple --
? - Read all values -- there aren't that many of them, so get them all.
X=12.34 - Set a value: the first byte is the port, then the voltage. I would recommend keeping the "=" and the "." as framing to ensure a valid packet if you forgo the checksum.
Another possibility: if your outputs are within a set range, you could prescale them. For example, if the output doesn't have to be exact, you could send something like
5=0
6=9
2=5
which would set port 5 off, port 6 to full on, and port 2 to half value. With this approach, ASCII and binary data are just about on the same footing in terms of computing/decoding resources on the micro. For more precision, make the output 2 bytes, e.g., 2=54. Or add a cross-reference table, and the values don't even have to be linear: the data byte becomes an index into a look-up table...
As I like to say; simple is usually better, unless it's not.
Hope this helps a bit.
Had another thought while re-reading: adding a "*" command could request the data wrapped in HTML tags, so your host app could simply redirect the output from your micro to a browser and voilà, browser-ready --
:)