Unexpected DMA behaviour - UART

I am trying to use DMA for LPUART RX and TX on an NXP board (S32K144). I am able to transmit a request and receive a response successfully, but the first four bytes of the response contain junk values, followed by the actual data. I am not able to figure out why I am getting these junk values.
Can anyone tell me what the issue might be?

Check whether the array initialization has completed before you send the data.
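As a rough sketch of that suggestion (the buffer size and the uart_dma_* helpers below are placeholders, not the actual S32K144 SDK calls): make sure the receive buffer is initialized and the DMA receive is armed before the request goes out, so the first response bytes don't land on top of stale data.

```c
#include <stdint.h>
#include <string.h>

#define RESP_LEN 16u                  /* expected response length (assumed) */

static uint8_t rx_buf[RESP_LEN];      /* DMA destination buffer */

/* Placeholder driver hooks - replace with the LPUART/DMA calls you use. */
extern void uart_dma_start_rx(uint8_t *dst, uint32_t len);
extern void uart_dma_send(const uint8_t *src, uint32_t len);

void send_request(const uint8_t *req, uint32_t req_len)
{
    /* Clear the buffer and arm the RX transfer first, then transmit:
       any bytes still sitting in the buffer from a previous transfer
       would otherwise show up as "junk" at the start of the response. */
    memset(rx_buf, 0, sizeof rx_buf);
    uart_dma_start_rx(rx_buf, sizeof rx_buf);

    uart_dma_send(req, req_len);
}
```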

Related

Transmitting and receiving a message using GMSK/MSK modulation (simulation)

I'm trying to simulate transmitting and receiving a simple message (like "Helloworld") using MSK/GMSK modulation, but I seem to have some trouble receiving the message. I used the relatively new Symbol Sync block, but I seem to be unable to get my input and output signals to align. I'm guessing I haven't set the Symbol Sync parameters properly. Does anyone have any suggestions on how to do that? (See below for the flowgraph and current output.)
I would like to eventually be able to implement this using USRP but am trying to simulate it first. Any help would be appreciated. Thanks.

How to handle EAGAIN case for TLS over SCTP streams using memory BIO interface

I'm using the BIO memory interface to implement TLS over SCTP.
At the client side, while sending out application data:
1. SSL_write() encrypts the data and writes it to the associated write BIO.
2. The data is read from the write BIO into an output buffer using BIO_read().
3. The buffer is sent out on the socket using sctp_sendmsg().
Similarly, at the server side, while reading data from the socket:
1. sctp_recvmsg() reads encrypted message chunks from the socket.
2. BIO_write() writes them into the read BIO.
3. SSL_read() decrypts the data read from the BIO.
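For reference, a minimal sketch of the client-side send path (steps 1-3 above) might look like the following; the buffer size, stream number 0 and the absence of error handling and partial-send handling are simplifications, and the SSL object, write BIO and connected SCTP socket are assumed to be set up elsewhere with SSL_set_bio().

```c
#include <openssl/ssl.h>
#include <openssl/bio.h>
#include <netinet/sctp.h>

int tls_send(SSL *ssl, BIO *wbio, int sd, const void *plain, int plen)
{
    unsigned char out[16384];                 /* large enough for one TLS record */

    /* Step 1: encrypt the application data into the write BIO. */
    if (SSL_write(ssl, plain, plen) <= 0)
        return -1;

    /* Step 2: pull the encrypted record(s) out of the memory BIO. */
    int n = BIO_read(wbio, out, (int)sizeof out);
    if (n <= 0)
        return -1;

    /* Step 3: ship the encrypted bytes over SCTP (stream 0 here). */
    return sctp_sendmsg(sd, out, (size_t)n, NULL, 0, 0, 0, 0, 0, 0);
}
```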
The case I'm interested in is where, at the client side, steps 1 and 2 are done, and while doing step 3 I get an EAGAIN from the socket. So I discard whatever data I've read from the BIO buffer and ask the application to resend the data after some time.
Now when I do this, and later when steps 1, 2 and 3 at the client side go through fine, at the server side OpenSSL finds that the record it received has a bad_record_mac and closes the connection.
From googling I learned that one possibility for this to happen is if TLS packets arrive out of sequence, as the MAC encoding depends on the previous packet encoded and TLS needs the packets delivered in the same order. So when I clean up the data on EAGAIN, am I dropping an SSL packet and then sending the next packet out of order (I'm missing clarity here)?
Just to test my hypothesis, whenever the socket returned EAGAIN I changed the code to wait indefinitely until the socket was writable; then everything goes fine and I don't see any bad_record_mac at the server side.
Can someone help me with this EAGAIN handling? I can't do an infinite wait to get around the issue, so is there any other way out?
... I get an EAGAIN from the socket. So I discard whatever data I've read from the BIO buffer and ask the application to resend the data after some time.
If you get an EAGAIN on the socket, you should try to send the same encrypted data later.
What you do instead is throw the encrypted data away and ask the application to send the same plain data again. This means that the data gets encrypted again. But encrypting plain data in SSL also includes a sequence number for the SSL frame, and this sequence number is not the same as that of the SSL frame you threw away.
Thus, if you have thrown away the full SSL frame, you are now sending a new SSL frame with the next sequence number, which does not match the sequence number the peer expects. If you succeeded in sending part of the previous SSL frame and threw away the rest, the new data you send will be considered part of the previous frame, which means that the HMAC of the frame will not match.
So don't throw away the encrypted data; try to resend it instead of letting the upper layer resend the plain data:
1. Select for writability.
2. Repeat the send.
3. If the send was incomplete, remove the part of the buffer that was sent and go to (1).
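A sketch of that loop (the function name is mine; select() is used on the single SCTP socket, as in step 1) could look like this, keeping the already encrypted bytes and retrying the same buffer instead of re-encrypting the plain data:

```c
#include <errno.h>
#include <sys/select.h>
#include <sys/types.h>
#include <netinet/sctp.h>

int send_all_encrypted(int sd, const unsigned char *buf, size_t len)
{
    size_t off = 0;

    while (off < len) {
        int n = sctp_sendmsg(sd, buf + off, len - off,
                             NULL, 0, 0, 0, 0, 0, 0);
        if (n > 0) {
            off += (size_t)n;          /* (3) drop the part that was sent */
            continue;
        }
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(sd, &wfds);
            /* (1) wait until the socket is writable ... */
            if (select(sd + 1, NULL, &wfds, NULL, NULL) < 0)
                return -1;
            continue;                  /* ... then (2) repeat the send */
        }
        return -1;                     /* real error */
    }
    return 0;
}
```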
So I discard whatever data I've read from the BIO buffer
I don't know what this means. You're sending, not receiving.
Just to test my hypothesis, whenever the socket returned EAGAIN I changed the code to wait indefinitely until the socket was writable; then everything goes fine and I don't see any bad_record_mac at the server side.
That's exactly what you should do. I can't imagine what else you could possibly have been doing instead, and your description of it doesn't make any sense.

Java and its signed bytes: is sending hex information via UDP possible?

I am currently working on an application to control my RGBWW light strips from a Java application.
The information has to be sent via UDP packets in order to be understood by the controller.
Unfortunately, the hex number 0x80 has to be sent, which is causing some problems.
Whenever I send a byte array containing only numbers from 0x00 to 0x79 (using DatagramPacket and a DatagramSocket), I do get a UDP packet popping up on my network monitor.
As soon as I include the number 0x80 or anything higher, I see two things happen:
1: I no longer get only UDP protocols; the messages are displayed as RTP/RTCP most of the time.
2: The method Integer.toHexString() does not display "80", but gives me "ffffff80".
My question: Is there something I am missing when it comes to sending hex info via UDP? Or is there another way of sending it, possibly avoiding the annoyingly signed bytes?
Unfortunately, I did not find any information that significantly helped me with this issue, but I hope you can help me!
Thanks in advance!
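For what it's worth, the "ffffff80" output is a symptom of sign extension: Java's byte is signed, so 0x80 is negative, and widening it to an int fills the upper 24 bits with ones; masking with 0xFF before formatting restores the unsigned value, and the bytes placed in the datagram carry the same bit pattern either way. The same promotion behaviour can be reproduced in C (used here only as an illustration, with signed char standing in for Java's byte):

```c
#include <stdio.h>

int main(void)
{
    signed char b = (signed char)0x80;   /* stands in for Java's signed byte */

    /* The value is negative, so widening it to int sign-extends: prints ffffff80 */
    printf("%x\n", (unsigned int)(int)b);

    /* Masking with 0xFF keeps only the low 8 bits: prints 80 */
    printf("%02x\n", b & 0xFF);

    return 0;
}
```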

USB HID: why should I write "null" to the control pipe in the OUT endpoint interrupt?

Digging around with/for HID reports, I ran into a strange problem with a USB HID device. I'm implementing a HID class device and have based my program on the HID USB example supplied by Keil. Some code has been changed in this project, and it seems to work fine with 32-byte input and 32-byte output reports. Somehow, after thousands of data transfers, Endpoint 1 OUT would hang and become a bad pipe. I searched Google for tips, and a topic reminded me that we should write a zero-length packet after sending a packet whose length matches what is defined in the report descriptor. But that did not work for me. Then I wrote a zero-length packet to the control pipe after receiving an OUT packet and, magically, it works! It has never hung, even after millions of transfers!
Here is my question: why does it work after writing a zero-length packet to the control pipe? The data transferred on the OUT pipe should have no relationship with the data on the control pipe. It confuses me!
If you transfer any data that is less than the expected payload size, you must send a zero-length packet to indicate that the data transfer has completed.
But this depends heavily on the host controller implementation; not all devices follow the specification to the letter, and some may stall.
Source:
When do USB Hosts require a zero-length IN packet at the end of a Control Read Transfer?
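As a rough illustration of that rule (the ep_write() helper and the 32-byte report size below are placeholders, not the Keil stack's actual API): after sending a report shorter than the size defined in the report descriptor, follow it with a zero-length packet so the host knows the transfer is complete.

```c
#include <stdint.h>

#define REPORT_SIZE 32u   /* report length from the report descriptor */

/* Placeholder endpoint write - replace with your stack's routine. */
extern void ep_write(uint8_t ep, const uint8_t *data, uint32_t len);

void hid_send_report(uint8_t ep, const uint8_t *data, uint32_t len)
{
    ep_write(ep, data, len);

    /* If the data is less than the expected payload size, terminate the
       transfer with a zero-length packet so the host does not keep
       waiting for the rest of the report. */
    if (len < REPORT_SIZE)
        ep_write(ep, NULL, 0u);
}
```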

MSP430 SPI to M25P64

I have SPI code written for the MSP430. If I send WRSR (01h) or RDSR (05h) to the M25P64 flash, the response I get on SPI_MISO is FFh.
So my question is: is the response I have obtained correct?
How do I verify that the handshaking between my SPI code and the flash is correct?
Thanks
AK
Is the response I have obtained correct?
The response is wrong; 30 seconds on Google and in the datasheet will tell you that. Things to check (since you have not provided any information):
How do I verify that the handshaking between my SPI code and the flash is correct?
Is this a new piece of SPI code? If so, have you checked with an oscilloscope that what you send out (clock and MOSI) is what you expect and matches what the datasheet says the device expects? It's the definitive way to be sure.
Does your SPI code work with any other devices?
Are your IO pins configured correctly on the MSP430?
Have you got the SPI module configured correctly for phase and polarity?
Did you forget to assert the chip select line?
What about HOLD?
Did you remember to send a dummy byte after the RDSR command so that the device would send the status register value?
Do you see a response from the device on an oscilloscope? Does the MSP430 read that value or a different one?
For a new piece of code you are sometimes better off first trying to read the device ID rather than the status register. The reason is that the device ID will never change, whereas the status register might (although that depends on the device).
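As a concrete starting point for those checks (spi_transfer(), cs_assert() and cs_release() are placeholders for whatever your MSP430 SPI code already provides, not a specific driver API): keep chip select low for the whole command, send the opcode, then clock out dummy bytes to read the reply.

```c
#include <stdint.h>

#define RDSR_OPCODE 0x05u   /* M25P64: Read Status Register */
#define RDID_OPCODE 0x9Fu   /* M25P64: Read Identification (JEDEC ID) */

/* Placeholder helpers - wire these to your existing MSP430 SPI code. */
extern uint8_t spi_transfer(uint8_t out);  /* clock one byte out, return the byte read */
extern void cs_assert(void);               /* drive the flash chip select low  */
extern void cs_release(void);              /* drive the flash chip select high */

uint8_t m25p64_read_status(void)
{
    uint8_t status;

    cs_assert();                   /* CS must stay low for the whole command */
    (void)spi_transfer(RDSR_OPCODE);
    status = spi_transfer(0xFFu);  /* dummy byte clocks the status register out */
    cs_release();

    return status;
}

void m25p64_read_id(uint8_t id[3])
{
    cs_assert();
    (void)spi_transfer(RDID_OPCODE);
    id[0] = spi_transfer(0xFFu);   /* manufacturer ID (0x20 for ST/Micron) */
    id[1] = spi_transfer(0xFFu);   /* memory type */
    id[2] = spi_transfer(0xFFu);   /* memory capacity */
    cs_release();
}
```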