I've got two test programs (A and B) that are nearly identical and use the same Boost.Asio UDP async code.
Here is the receive call:
_mSocket.async_receive_from(
    boost::asio::buffer(_mRecvBuffer), _mReceiveEndpoint,
    boost::bind(&UdpConnection::handle_receive, this, _mReceiveEndpoint,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
// _mReceiveEndpoint is known and good. the buffer is good too.
// here's the handler
void handle_receive(const udp::endpoint recvFromEP,
                    const boost::system::error_code& error,
                    std::size_t bytesRecv /*bytes_transferred*/)
{
    boost::shared_ptr<std::string> message(
        new std::string(_mRecvBuffer.c_array(), bytesRecv));
    if (!error)
    {
        doSomeThingGood();
    }
    else
    {
        cerr << "UDP Recv error : " << error << endl;
    }
}
So here's what happens, all on localhost.
If I start program 'A' first, then program 'B', 'A' gives a UDP Recv error : server:10061.
Program 'A' continues to send just fine and 'B' receives just fine.
You can swap 'A' and 'B' in the above sentence and it is still true.
If I attempt a reset of the bad read condition by calling _mSocket.async_receive_from again, I get error 10054.
I've looked these errors up on the web, but that wasn't very helpful.
Anybody have any ideas as to what these mean, and how I can recover inside the program if this condition occurs? Is there a way to reset the socket?
Sanity check.... can both programs operate on loopback with only two ports?
A send = 20000, A receive = 20001
B send = 20001, B receive = 20000
TL;DR
It appears as though if I try to listen before I'm sending, I get an error & I can't recover from it. If I listen after sending, I'm fine.
-- EDIT - It appears that McAfee host intrusion prevention is doing something nasty to me.... If I debug in VS2010, I get stuck in their DLL.
Thanks
In my receive handler, I wasn't calling _mSocket.async_receive_from() again.... I just printed the error and exited.
Silly mistake, just posting here in case it helps anyone else.
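For reference, here is roughly what the corrected handler looks like - a minimal sketch reusing the names from the snippet above (not the exact production code); the key point is that the read is re-armed at the end of the handler so the socket keeps receiving after every datagram and after an error:

void UdpConnection::start_receive()
{
    _mSocket.async_receive_from(
        boost::asio::buffer(_mRecvBuffer), _mReceiveEndpoint,
        boost::bind(&UdpConnection::handle_receive, this, _mReceiveEndpoint,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

void UdpConnection::handle_receive(const udp::endpoint recvFromEP,
                                   const boost::system::error_code& error,
                                   std::size_t bytesRecv)
{
    if (!error)
    {
        doSomeThingGood();
    }
    else
    {
        std::cerr << "UDP Recv error : " << error << std::endl;
    }
    start_receive();   // re-arm the read so the socket keeps listening
}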
Also for a similar problem with a different resolution:
_mSocket.set_option(boost::asio::socket_base::reuse_address(true));
helps if you have multiple listeners.
Several sources explain that you should use SO_REUSEADDR on Windows, but none mention that it is possible to receive UDP messages both with and without binding the socket.
The code below binds the socket to a local listen_endpoint. That is essential: without it you can and will still receive your UDP messages, but by default you will have exclusive ownership of the port.
However, if you set reuse_address(true) on the socket (or on the acceptor when using TCP) and bind the socket afterwards, multiple applications, or multiple instances of your own application, can bind to the same port again, and everyone will receive all messages.
// Create the socket so that multiple may be bound to the same address.
boost::asio::ip::udp::endpoint listen_endpoint(
    listen_address, multicast_port);

// == important part ==
socket_.open(listen_endpoint.protocol());
socket_.set_option(boost::asio::ip::udp::socket::reuse_address(true));
socket_.bind(listen_endpoint);
// == important part ==

boost::array<char, 2000> recvBuffer;
socket_.async_receive_from(boost::asio::buffer(recvBuffer), m_remote_endpoint,
    boost::bind(&SocketReader::ReceiveUDPMessage, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
source:
http://www.boost.org/doc/libs/1_45_0/doc/html/boost_asio/example/multicast/receiver.cpp
We are working with nRF52840 dongles and want them to relay data over an OpenThread mesh network through UDP automatically. We have found within the OpenThread API a solid Udp.h library with all the UDP functions we need to write code that runs on the dongles from main.c.
Below is our code that should broadcast the message: "Hallo" to all nodes that have an open socket on port 1994.
We have read that the IPv6 address ff03::1 is reserved for multicast UDP broadcasting, and it works perfectly when performed manually with the CLI udp commands.
CLI: udp open, udp send ff03::1 1994 Hallo
All the nodes that have run udp open and udp bind :: 1994 receive the Hallo message from the sending node.
We are trying to recreate this in the main.c of our nodes so that we can provide the nodes with some intelligence of their own.
This piece of code is run once when the push button on the dongle is pressed.
The code compiles perfectly, and we have tested the functions that return a value with the RGB LED (green = OK, red = error) to confirm that no errors were produced (sadly, not all functions return an error value).
void udpSend(){
    const char *buf = "Hallo";

    otMessageInfo messageInfo;
    otInstance *myInstance;
    myInstance = thread_ot_instance_get();
    otUdpSocket mySocket;

    memset(&messageInfo, 0, sizeof(messageInfo));
    // messageInfo.mPeerAddr = otIp6GetUnicastAddresses(myInstance)->mNext->mNext->mAddress;
    otIp6AddressFromString("ff03::1", &messageInfo.mPeerAddr);
    messageInfo.mPeerPort = 1994;
    messageInfo.mInterfaceId = OT_NETIF_INTERFACE_ID_THREAD;

    otUdpOpen(myInstance, &mySocket, NULL, NULL);
    otMessage *test_Message = otUdpNewMessage(myInstance, NULL);
    otMessageSetLength(test_Message, sizeof(buf));

    if (otMessageAppend(test_Message, &buf, sizeof(buf)) == OT_ERROR_NONE){
        nrf_gpio_pin_write(LED2_G, 0);
    }
    else{
        nrf_gpio_pin_write(LED2_R, 0);
    }

    otUdpSend(&mySocket, test_Message, &messageInfo);
    otCliUartOutputFormat("Done.\0");
    otUdpClose(&mySocket);
}
Now, we aren't exactly experts, so we are not sure why this isn't working; we had a lot of trouble figuring out how everything is called/initialised.
We hope to create a way to send and receive data over UDP from the code, so that the nodes can operate autonomously.
We would really appreciate it if someone could assist us with our project!
Thanks!
Jonathan
There are a few errors in your code (a corrected sketch is at the end of this answer):
Remove the call to otMessageSetLength(). The message length is automatically increased as part of otMessageAppend().
The call to otMessageAppend() should be: otMessageAppend(test_Message, buf, (uint16_t)strlen(buf)).
Removed the & before buf.
Replaced sizeof() with strlen().
A couple of other things you should consider:
After calling otUdpNewMessage(), if any following call returns an error, make sure to call otMessageFree() on the message buffer.
Custody is only given to OpenThread after a successful call to otUdpSend().
Do not call udpSend() from interrupt context.
The OpenThread library was designed to assume a single thread of execution.
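Putting those fixes together, a corrected version of udpSend() might look roughly like this (a sketch only, using the same names and the same API signatures as in your snippet; error handling via otMessageFree() added as described above):

void udpSend(void)
{
    const char *buf = "Hallo";
    otInstance *myInstance = thread_ot_instance_get();
    otUdpSocket mySocket;
    otMessageInfo messageInfo;

    memset(&messageInfo, 0, sizeof(messageInfo));           // needs <string.h>
    otIp6AddressFromString("ff03::1", &messageInfo.mPeerAddr);
    messageInfo.mPeerPort = 1994;
    messageInfo.mInterfaceId = OT_NETIF_INTERFACE_ID_THREAD;

    otUdpOpen(myInstance, &mySocket, NULL, NULL);

    otMessage *test_Message = otUdpNewMessage(myInstance, NULL);
    // No otMessageSetLength(): otMessageAppend() grows the message itself.
    if (otMessageAppend(test_Message, buf, (uint16_t)strlen(buf)) == OT_ERROR_NONE)
    {
        nrf_gpio_pin_write(LED2_G, 0);
        // Custody passes to OpenThread only if otUdpSend() succeeds.
        if (otUdpSend(&mySocket, test_Message, &messageInfo) != OT_ERROR_NONE)
        {
            otMessageFree(test_Message);
        }
    }
    else
    {
        nrf_gpio_pin_write(LED2_R, 0);
        otMessageFree(test_Message);
    }

    otUdpClose(&mySocket);
}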
Hope that helps.
I have started developing an application that monitors the WebSocket messages from all clients connected to the WebSocket server; the server relays every message it receives to this application.
Problem
When I run my program (in Visual Studio I hit Start), it builds and starts up perfectly and performs most of its functionality the same way every time. However, there is one portion of code that commonly does not run the same way. Below is the small snippet of that code.
msg = "set name monitor"
SendMessage2(socket, msg, msg.Length)
msg = "set monitor 1"
SendMessage2(socket, msg, msg.Length)
Console.WriteLine("We are after our second SendMessage2 function")
I know that the two calls to SendMessage2 are always executed because Visual Studio's debug console will output the following:
We are at the end of the SendMessage2 Sub
We are at the end of the SendMessage2 Sub
We are after our second SendMessage2 function
I also know when it executes correctly, because my WebSocket server will output one of the two blocks below.
Output when app runs correctly
Client 4 connected
New thread created
Connection received. Parsing headers.
Message from socket #4: "set name monitor"
Message from socket #4: "set monitor 1"
Output when app runs incorrectly
Client 4 connected
New thread created
Connection received. Parsing headers.
Message from socket #4: "set name monitor"
Notice how the second output is missing the second message from the monitor application.
What have I tried
Using a string variable to call the functions
Calling the functions using static string arguments (not using the variable msg)
SyncLocking the functions separately
SyncLocking inside the SendMessage2 function
Reordering the functions (swapping the strings to change behavior)
TL;DR
Why is it that even when I do not change my code, my program will execute two separate ways? Am I doing something incorrectly when calling my SendMessage2 Sub?
I am all out of ideas. I am willing to try any recommendation to fix this problem.
All code can be found on GitHub here
So I figured it out.
It is actually not the VB application that is messing up, nor is it my server. While debugging I was looking at the number of bytes received by my server and I noticed the following:
Client 4 connected
New thread created
Connection received. Parsing headers.
bytes read: 25
Message from socket #4: "set name monitor"
bytes read: 22
Message from socket #4: "set monitor 1"
OK, great: we have 25 bytes from "set name monitor" and 22 bytes from "set monitor 1".
Client 4 connected
New thread created
Connection received. Parsing headers.
bytes read: 47
Message from socket #4: "set name monitor"
And boom. Both programs were doing their jobs, sending the correct number of bytes every time and reading the correct number. However, the VB application is sending them so quickly back to back that my server was reading all 47 bytes at a time instead of the separate 25 and 22 bytes.
Solution
I solved this problem by implementing a secondary buffer in my server to store off all bytes after the first message, should multiple messages be grouped like this. Now I check whether my secondaryBuffer is empty before reading in new bytes.
Here is a portion of the code used to solve the problem
/* Byte check: look for the start of a second frame inside the bytes just read */
for (j = 0; j < bytes; j++) {
    if (j < 2)        /* need two preceding bytes to look back at */
        continue;
    if (readBuffer[j] == '\x81' && readBuffer[j-1] == '\x00' && readBuffer[j-2] == '\x00') {
        secondaryBytes = bytes - j;
        printf("Potential second message attached to this message\nCopying it to the secondary buffer.\n");
        memcpy(secondaryBuffer, readBuffer + j, secondaryBytes);
        break;
    } //END IF
} //END FOR LOOP
/**/
I'm pulling my hair out with this one. A month or so ago, I was able to put together a proof-of-concept WebRTC demo, using some sample code from the good folks at SignalR. The demo is located here, the source for it is here, and it does what it's supposed to do.
But since I took that code and moved it into our actual application, I haven't been able to get it to work. Of course the code had to be changed significantly - different backends, different set of frameworks and supporting code, support for multiple simultaneous connections, that sort of thing - but the core logic is very similar. And still I can't get it to work.
I've put together a sample app here that demonstrates the problem:
https://bitbucket.org/smithkl42/signalr.webrtc
The core WebRTC logic is all in this TypeScript file:
https://bitbucket.org/smithkl42/signalr.webrtc/src/tip/SignalR.WebRTC/Scripts/Media/WebRTC.ts?at=default
It's several hundred lines long, so I won't bother posting it here, but you can see it by clicking on the link above.
When it runs, it produces output like this:
12:17:58.531 WebRTCController.call(): Calling 7d9e0d39-5047-4afe-86e5-e6e01b9f5955 when preparations have finished
12:17:58.533 WebRTCController.prepareForCall(): Preparing for call: localSessionId='39d2df53-6854-415a-8748-b5230eda2eb1'; remoteSessionId='7d9e0d39-5047-4afe-86e5-e6e01b9f5955'
12:18:0.139 Object.(): The user has granted media device access, so proceeding to prepare for call
12:18:0.141 Connection.createPeerConnection(): Creating peer connection; using stunServer stun:stun1.l.google.com:19302
12:18:0.144 (): Preparations finished. Creating and sending JSEP offer. Util.js:21
12:18:0.272 Connection.handleIceCandidate(): STUN server has found an ICE candidate (event.type='icecandidate').
12:18:0.282 Connection.handleIceCandidate(): STUN server has found an ICE candidate (event.type='icecandidate').
(More like that)
12:18:0.655 WebRTCController.handleJsepAnswer(): Handling JsepAnswer from 7d9e0d39-5047-4afe-86e5-e6e01b9f5955
12:18:0.694 Object.(): Sending ICE candidate to the remote machine: {"sdpMLineIndex":0,"sdpMid":"audio","candidate":"a=candidate:2999745851 1 udp 2113937151 192.168.56.1 62978 typ host generation 0\r\n"}
12:18:0.706 Object.(): Sending ICE candidate to the remote machine: {"sdpMLineIndex":0,"sdpMid":"audio","candidate":"a=candidate:2999745851 2 udp 2113937151 192.168.56.1 62978 typ host generation 0\r\n"}
(More like that)
But then it never connects, i.e., the video from the other side never starts playing. At the signaling layer, I can tell by the logs and by stepping through the code that the first browser is sending a JSEP offer; the second browser is receiving it, storing it and sending back an appropriate JSEP answer; and the first machine is storing that answer. Each peerConnection is then finding the ICE candidates and sending them to the remote machine; and each peerConnection is receiving and apparently trying those ICE candidates; and the peerConnections are even raising the onaddstream event. But the video never starts playing.
The state of the peerConnection object all the way through looks like this:
(iceGatheringState=new; iceState=starting; readyState=active)
The frustrating bit is that every so often, maybe one time out of 20, it does work, i.e., both videos show up. So I'm not doing everything wrong. It sounds like a timing issue of some sort - but I can't figure out what it is. And so far as I can tell, there's not much in the WebRTC objects (specifically RTCPeerConnection) to tell you what's going wrong.
I hate to ask anybody else to do my troubleshooting for me, but... well, I'm running out of options. Does anybody else see anything I'm doing obviously wrong?
Update 2012-12-19: I'm making some progress. I realized I was calling peerConnection.setLocalDescription() synchronously, i.e., without specifying callbacks. So now I've got some lines of code that look like this:
// Answer the call by sending a JsepAnswer message.
connection.peerConnection.createAnswer(
    answer => {
        connection.peerConnection.setLocalDescription(answer, () => {
            var signalState: mData.SignalState = {
                FromSessionId: connection.localSessionId,
                ToSessionId: connection.remoteSessionId,
                Message: JSON.stringify(answer)
            };
            me.roomHub.server.jsepAnswer(signalState);
            mUtil.log("Sent JSEP answer: " + signalState.Message);
            connection.readyForIceCandidates.resolve();
        },
        error => {
            mUtil.error("Error setting local description from created answer: " + error + "; answer=" + JSON.stringify(answer));
        });
    },
    error => {
        mUtil.error("Error creating answer: " + error);
    }, me.mediaConstraints);
And the setLocalDescription() error callback is showing this error:
16:14:42.439 WebRTCController.handleJsepOffer(): Error setting local description from created answer: SetLocalDescription failed.; answer={"sdp":"v=0\r\no=- 439659381 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=group:BUNDLE audio video\r\na=msid-semantic: WMS u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\nm=audio 1 RTP/SAVPF 103 104 111 0 8 107 106 105 13 126\r\nc=IN IP4 0.0.0.0\r\na=rtcp:1 IN IP4 0.0.0.0\r\na=ice-ufrag:vOKflTJ56gV0R9i0\r\na=ice-pwd:9nuXPMDvQ2mZATFCQyEzPRQz\r\na=sendrecv\r\na=mid:audio\r\na=rtcp-mux\r\na=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:m9q9pmLgLuFnfFC09KXKW5p8TjsKk+VdqX0OWv77\r\na=rtpmap:103 ISAC/16000\r\na=rtpmap:104 ISAC/32000\r\na=rtpmap:111 opus/48000/2\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:107 CN/48000\r\na=rtpmap:106 CN/32000\r\na=rtpmap:105 CN/16000\r\na=rtpmap:13 CN/8000\r\na=rtpmap:126 telephone-event/8000\r\na=ssrc:548068416 cname:IXg8QRisWrd7+7f8\r\na=ssrc:548068416 msid:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf a0\r\na=ssrc:548068416 mslabel:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\na=ssrc:548068416 label:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjfa0\r\nm=video 1 RTP/SAVPF 100 116 117\r\nc=IN IP4 0.0.0.0\r\na=rtcp:1 IN IP4 0.0.0.0\r\na=ice-ufrag:vOKflTJ56gV0R9i0\r\na=ice-pwd:9nuXPMDvQ2mZATFCQyEzPRQz\r\na=sendrecv\r\na=mid:video\r\na=rtcp-mux\r\na=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:m9q9pmLgLuFnfFC09KXKW5p8TjsKk+VdqX0OWv77\r\na=rtpmap:100 VP8/90000\r\na=rtpmap:116 red/90000\r\na=rtpmap:117 ulpfec/90000\r\na=ssrc:1460425980 cname:IXg8QRisWrd7+7f8\r\na=ssrc:1460425980 msid:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf v0\r\na=ssrc:1460425980 mslabel:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjf\r\na=ssrc:1460425980 label:u9fhVrWeLLweqb5ubLkw61Ijsh6BM6vZLhjfv0\r\n","type":"answer"}
Now I just need to figure out why that particular SDP - which comes straight from the createAnswer() method - is failing.
Update 2012-12-20: I've created an online demonstration of the problem here: http://srdemo.alanta.com/. I've also turned on Chrome debug logging, with the result that I see a bunch of errors that look like this:
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
[6584:7308:1220/091356:ERROR:rtc_peer_connection_handler.cc(84)] Native session description is null.
Not sure what relationship they have to my problem, but I'm continuing to look into it.
Edit 2012-12-20: I've managed (I think) to narrow the problem down. See this question for more precise details.
Figured it out. Turns out that SignalR 1.0 RC1 has a bug in it that changes any "+" in a string into a space. So lines in the SDP that looked like this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2+z
Were getting changed into this:
a=ice-pwd:qZFVvgfnSso1b8UV1SUDd2 z
But because not every SDP had a "+" in it on a critical line, sometimes it would work. Everything explained.
The bug has been reported to the good folks working on SignalR (see https://github.com/SignalR/SignalR/issues/1194), and in the meantime, a simple encodeURIComponent() and decodeURIComponent() around the strings in question fixed it.
I'm writing some code whose purpose is to read values sent by an ECG.
The ECG sends the values read by its sensors through a serial connection, and (as a start) all the program has to do is read the input and display it in a text view.
However I have hit a wall and can't seem to solve the following two problems:
I get the following error a lot of the time when I try to connect to the ECG: Unable to open /dev/tty.usbserial.A700eLwM - : Resource busy.
The port is not being used by any other applications but the ECG is sending numbers.
Can I somehow tell the OS that whatever is happening and whoever is using that port I want to have full control of the port?
My code is as follows:
fd = open("/dev/tty.usbserial-A700eLwM", O_RDWR | O_NOCTTY | O_NDELAY);
[textView insertText:[NSString stringWithFormat:@"Port status: %d\n", fd]];
if (fd == -1)
{
    /*
     * Could not open the port.
     */
    perror("open_port: Unable to open /dev/tty.usbserial.A700eLwM - ");
}
else {
    fcntl(fd, F_SETFL, 0);
}
My second problem is that I don't quite understand how I can buffer the reading into a string or integer variable and send it to the text view.
Any help will be appreciated
Thanks in advance
The most likely reason is that you've activated the serial port as a network device in Network Preferences. If it's listed there, select it and use the cogwheel menu item "mark as inactive".
For your second problem there are a lot of other matching questions on the site; search for them.
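If it helps, one minimal way to handle the reading is sketched below (plain POSIX calls only; wrapping the result in an NSString for the text view is left to your UI code, and the function name is made up):

#include <unistd.h>

// Read whatever bytes are currently available on the serial descriptor into a
// NUL-terminated buffer; returns whatever read() returns (bytes read, 0, or -1).
ssize_t read_serial_chunk(int fd, char *buf, size_t bufsize)
{
    ssize_t n = read(fd, buf, bufsize - 1);   // leave room for the terminator
    if (n > 0)
        buf[n] = '\0';                        // usable as a C string afterwards
    return n;
}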
I am using an STM32F105 microcontroller with the STM32_USB-FS-Device_Lib_V3.2.1 USB library and have adapted the VCP example for our purposes (integration with RTOS and serial API).
The problem is that if the USB cable is attached, but the port is not open on the Windows host, after a few minutes the device ends up permanently re-entering the USB ISR until the port is opened and then it all starts working normally.
I have instrumented the interrupt handler and can see that when the fault occurs, the ISR handler exits and then immediately re-enters. This occurs because on exit from the interrupt the IEPINT flag in OTG_FS_GINTSTS is not clear. The OTG_FS_DAINT at this time contains 0x00000002 (IEPINT1 set), while DIEPINT1 has 0x00000080 (TXFE). The line in OTGD_FS_Handle_InEP_ISR() that clears TXFE is called, but the bit either does not clear or becomes immediately reasserted. When the COM port on the host is reopened, the state of OTG_FS_GINTSTS and OTG_FS_DAINT at the end of the interrupt is always zero, and further interrupts occur at the normal rate. Note that the problem only occurs if data is being output but the host has no port open. If either the port is open or no data is output, the system runs indefinitely. I believe that the more data that is output the sooner the problem occurs, but that is anecdotal at present.
The VCP code has a state variable that takes the following enumerated values:
UNCONNECTED,
ATTACHED,
POWERED,
SUSPENDED,
ADDRESSED,
CONFIGURED
and we use the CONFIGURED state to determine whether to put data into the driver buffer for sending. However, the CONFIGURED state is set when the cable is attached, not when the host has the port open and an application connected. I see that when Windows does open the port there is a burst of interrupts, so it seems that some communication occurs on this event; I wonder, therefore, whether it is possible to detect that the host has the port open.
I need one of two things perhaps:
To prevent the USB code from getting stuck in the ISR in the first instance
To determine whether the host has the port open from the device end, and only push data for sending when open.
Part (1) - preventing the interrupt lock-up - was resolved by a USB library bug fix from ST support; the library was not correctly clearing the TxEmpty interrupt.
After some research and assistance from ST Support, I have determined a solution to part (2): detecting whether the host port is open. Conventionally, when a port is opened the DTR modem control line is asserted. This information is passed to a CDC class device, so I can use it to achieve my aim. It is possible for an application to change the behaviour of DTR, but this should not happen in any of the client applications that are likely to connect to this device in this case. However, there is a back-up plan that implicitly assumes the port to be open if the line coding (baud, framing) is set. In that case there is no means of detecting closure, but at least it will not prevent an unconventional application from working with my device, even if it then causes it to crash when it disconnects.
Regarding ST's VCP example code specifically I have made the following changes to usb_prop.c:
1) Added the following function:
#include <stdbool.h>

static bool host_port_open = false;

bool Virtual_Com_Port_IsHostPortOpen()
{
    return bDeviceState == CONFIGURED && host_port_open;
}
2) Modified Virtual_Com_Port_NoData_Setup() handling of SET_CONTROL_LINE_STATE thus:
else if (RequestNo == SET_CONTROL_LINE_STATE)
{
    // Test DTR state to determine if host port is open
    host_port_open = (pInformation->USBwValues.bw.bb0 & 0x01) != 0;
    return USB_SUCCESS;
}
3) To allow use with applications that do not operate DTR conventionally I have also modified Virtual_Com_Port_Data_Setup() handling of SET_LINE_CODING thus:
else if (RequestNo == SET_LINE_CODING)
{
    if (Type_Recipient == (CLASS_REQUEST | INTERFACE_RECIPIENT))
    {
        CopyRoutine = Virtual_Com_Port_SetLineCoding;

        // If line coding is set the port is implicitly open
        // regardless of host's DTR control. Note: if this is
        // the only indicator of port open, there will be no indication
        // of closure, but this will at least allow applications that
        // do not assert DTR to connect.
        host_port_open = true;
    }
    Request = SET_LINE_CODING;
}
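The application side then simply gates its output on that flag, roughly like this (a sketch only; the send routine here is a hypothetical stand-in for whatever pushes data into the driver buffer):

if (Virtual_Com_Port_IsHostPortOpen())
{
    vcp_queue_for_send(data, len);   // hypothetical call that fills the driver TX buffer
}
else
{
    // host port closed: drop the data or buffer it locally
}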
I found another solution by adapting CDC_Transmit_FS.
It can now be used as the output for printf by overriding the _write function.
First it checks the connection state, then it tries to send over the USB endpoint in a busy loop, which repeats the send while the USB is busy.
I found out that if dev_state is not USBD_STATE_CONFIGURED, the USB plug is disconnected. If the plug is connected but no VCP port is open via PuTTY or Termite, the second check fails.
This implementation works fine for me in an RTOS and CubeMX HAL application. The busy loop no longer blocks low-priority threads.
uint8_t CDC_Transmit_FS(uint8_t* Buf, uint16_t Len)
{
    uint8_t result = USBD_OK;

    // Check if the USB interface is online and a VCP connection is open
    // prior to sending:
    if ((hUsbDevice_0->dev_state != USBD_STATE_CONFIGURED)
        || (hUsbDevice_0->ep0_state == USBD_EP0_STATUS_IN))
    {
        // The physical connection fails.
        // Or: the physical connection is open, but no VCP link is up.
        result = USBD_FAIL;
    }
    else
    {
        USBD_CDC_SetTxBuffer(hUsbDevice_0, Buf, Len);

        // Busy-wait while USB is busy; exit on success or if a disconnection happens
        while (1)
        {
            // Check if USB went offline while retrying
            if ((hUsbDevice_0->dev_state != USBD_STATE_CONFIGURED)
                || (hUsbDevice_0->ep0_state == USBD_EP0_STATUS_IN))
            {
                result = USBD_FAIL;
                break;
            }

            // Try to send
            result = USBD_CDC_TransmitPacket(hUsbDevice_0);
            if (result == USBD_OK)
            {
                break;
            }
            else if (result == USBD_BUSY)
            {
                // Retry until the USB device is free.
            }
            else
            {
                // Any other failure
                result = USBD_FAIL;
                break;
            }
        }
    }
    return result;
}
CDC_Transmit_FS is used by _write:
// This function is used by printf and puts.
int _write(int file, char *ptr, int len)
{
    (void) file;  // Ignore file descriptor

    uint8_t result;
    result = CDC_Transmit_FS((uint8_t*)ptr, len);
    if (result == USBD_OK)
    {
        return (int)len;
    }
    else
    {
        return EOF;
    }
}
Regards
Bernhard
After much searching and a kind of reverse engineering, I finally found a method for detecting the opening of a terminal and also its termination. I found that in the CDC class there are three nodes: one is a control node and the other two are data-in and data-out nodes. Now, when you open a terminal a code is sent to the control node, and likewise when you close it. All we need to do is get those codes and use them to start and stop our data transmission tasks. The codes that are sent are 0x21 and 0x22, respectively, for opening and closing the terminal. In usb_cdc_if.c there is a function that receives and interprets those codes (there is a switch case, and the variable cmd is the code we are talking about); that function is CDC_Control_FS. Here we are: now all we need to do is expand that function so that it interprets the 0x21 and 0x22. There you are; now you know in your application whether the port is open or not.
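A minimal sketch of that expansion (the names here are made up, 0x21/0x22 are the codes observed above, and the existing generated cases of CDC_Control_FS stay as they are):

// usb_cdc_if.c: track the terminal state from the control requests.
static volatile uint8_t vcp_port_open = 0;

// Call this at the top of CDC_Control_FS() with its cmd parameter.
static void vcp_track_port_state(uint8_t cmd)
{
    if (cmd == 0x21)           // code seen when a terminal opens the port
        vcp_port_open = 1;
    else if (cmd == 0x22)      // code seen when the terminal is closed
        vcp_port_open = 0;
}

uint8_t vcp_is_port_open(void)
{
    return vcp_port_open;      // poll this from the application / TX task
}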
I need one of two things perhaps:
To prevent the USB code from getting stuck in the ISR in the first instance
To determine whether the host has the port open from the device end, and only push data for sending when open.
You should attempt to do option 1 instead of 2. On Windows and Linux, it is possible to open a COM port and use it without setting the control signals, which means there is no fool-proof, cross-platform way to detect that the COM port is open.
A well programmed device will not let itself stop functioning just because the USB host stopped polling for data; this is a normal thing that should be handled properly. For example, you might change your code so that you only queue up data to be sent to the USB host if there is buffer space available for the endpoint. If there is no free buffer space, you might have some special error handling code.
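For example, with ST's CDC class such a check might look roughly like this (a sketch under stated assumptions: hcdc and TxState come from the HAL CDC handle shown in other answers here, and usb_tx_enqueue() is a hypothetical ring-buffer helper):

USBD_CDC_HandleTypeDef *hcdc = (USBD_CDC_HandleTypeDef*) USBD_Device.pClassData;

if (hcdc->TxState == 0)
{
    usb_tx_enqueue(data, len);   // previous transmit finished: safe to queue more
}
else
{
    // endpoint still busy (host may not be reading): drop or count an overflow
}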
I have the same requirement to detect PC port open/close. I have seen it implemented as follows:
Open detected by:
DTR asserted
CDC bulk transfer
Close detected by:
DTR deasserted
USB "unplugged", sleep etc
This seems to be working reasonably well, although more thorough testing will be needed to confirm it works robustly.
Disclaimer: I use code generated by Cube, and as a result it works with the HAL drivers. The solutions proposed here before don't work for me, so I have found another one. It is not pretty, but it works for some purposes.
One indirect sign of a port that is not open arises when you try to transmit a packet with CDC_Transmit_FS and then wait until TxState is set to 0. If the port is not open, that never happens. So my solution is to fix a timeout:
uint16_t count = 0;
USBD_CDC_HandleTypeDef *hcdc =
        (USBD_CDC_HandleTypeDef*) USBD_Device.pClassData;

while (hcdc->TxState != 0) {
    if (++count > BUSY_TIMEOUT) { // number of cycles to wait till it makes the decision
        // here it's clear that the port is not opened
        break;
    }
}
There is also the problem that if one tries to open the port after the device has already tried to send a packet, it can't be done. Therefore the whole routine I use is:
uint8_t waitForTransferCompletion(void) {
    uint16_t count = 0;
    USBD_CDC_HandleTypeDef *hcdc =
            (USBD_CDC_HandleTypeDef*) USBD_Device.pClassData;

    while (hcdc->TxState != 0) {
        if (++count > BUSY_TIMEOUT) { // number of cycles to wait till it makes the decision
            USBD_Stop(&USBD_Device);  // stop and
            MX_USB_DEVICE_Init();     // init the device again
            HAL_Delay(RESET_DELAY);   // give a chance to open the port
            return USBD_FAIL;         // return fail, to send the last packet again
        }
    }
    return USBD_OK;
}
The question is how big the timeout has to be so as not to interrupt transmission while the port is open. I set BUSY_TIMEOUT to 3000, and now it works.
I fixed it by checking the variable hUsbDeviceFS.ep0_state.
It equals 5 if connected and 4 if not connected or after a disconnect.
But there is an issue in the HAL: it equals 5 when the program starts.
The following steps fixed it at the beginning of the program:
/* USER CODE BEGIN 2 */
HAL_Delay(500);
hUsbDeviceFS.ep0_state = 4;
...
I do not have any wish to dig into the HAL; I hope this post will be seen by the developers and they will fix the HAL.
It helped me to fix my issue.
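For completeness, a rough sketch of how that check might be used before sending (the value 5 is the empirical observation above, not a documented HAL constant):

if (hUsbDeviceFS.ep0_state == 5)
{
    CDC_Transmit_FS(buf, len);   // port appears to be open: safe to send
}
else
{
    // not connected (4): skip the send
}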