Resume two TLS sessions with Mbed TLS

I'm updating an IoT device that's been up and running for several years. TLS is performed using mbedTLS 2.9.0, and the application currently serializes all tasks which require TLS so that only one secured connection is established at a time. The update is adding FTPS to its FTP client, and this requires secure connections for both the command and data ports.
FileZilla Server 1.4.1 complains "TLS session of data connection not resumed" when attempting to secure the data port. This issue is mentioned in the Stack Overflow topic "How to establish a FTPS data connection to a FileZilla Server 1.2.0." That topic references the FileZilla forum topic "forum.filezilla-project.org/viewtopic.php?t=54027," which states that a prior session must be resumed.
In scanning through the Mbed TLS header files I noted the functions mbedtls_ssl_set_session() and mbedtls_ssl_get_session(), which appear to be relevant. Assuming they are, I added a call to mbedtls_ssl_get_session() prior to starting TLS for the data connection, but that call attempts to free() uninitialized objects in the mbedtls_ssl_context.
Since trustedfirmware.org took over the Mbed TLS project, the prior forum, documentation, support articles, etc. have disappeared, and their mailing list is pretty much dead. So, can someone please point me to some documentation describing how this is done, or describe what has to be done?
Thx!!

that call attempts to free() uninitialized objects in the mbedtls_ssl_context
There shouldn't be any uninitialized objects ever. So I guess the problem is that you forgot to initialize the context. Whenever you have an mbedtls_xxx structure, always call the corresponding mbedtls_xxx_init() function before doing anything else.
With two connections and session loading, the code to set up the connections would look something like this.
mbedtls_ssl_config ssl_config;
mbedtls_ssl_config_init(&ssl_config);

mbedtls_ssl_context ssl_context_control, ssl_context_data;
mbedtls_ssl_init(&ssl_context_control);
mbedtls_ssl_init(&ssl_context_data);

int ret;
// prepare ssl_config here
ret = mbedtls_ssl_setup(&ssl_context_control, &ssl_config);
if (ret != 0) goto error;
ret = mbedtls_ssl_setup(&ssl_context_data, &ssl_config);
if (ret != 0) goto error;

if (have_saved_sessions) {
    mbedtls_ssl_session ssl_session_control, ssl_session_data;
    mbedtls_ssl_session_init(&ssl_session_control);
    mbedtls_ssl_session_init(&ssl_session_data);

    ret = mbedtls_ssl_session_load(&ssl_session_control, ...);
    if (ret != 0) goto session_done;
    ret = mbedtls_ssl_session_load(&ssl_session_data, ...);
    if (ret != 0) goto session_done;
    ret = mbedtls_ssl_set_session(&ssl_context_control, &ssl_session_control);
    if (ret != 0) goto session_done;
    ret = mbedtls_ssl_set_session(&ssl_context_data, &ssl_session_data);
    if (ret != 0) goto session_done;

session_done:
    mbedtls_ssl_session_free(&ssl_session_control);
    mbedtls_ssl_session_free(&ssl_session_data);
    if (ret != 0) goto error;
}
// the contexts are ready for use here
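For the FTPS case in the question, you may not need to serialize sessions at all: once the handshake on the control connection has completed, you can copy its session straight into the data context before starting that connection's handshake. A minimal sketch, continuing the names above:
mbedtls_ssl_session saved_session;
mbedtls_ssl_session_init(&saved_session);

// after mbedtls_ssl_handshake() has succeeded on the control connection
ret = mbedtls_ssl_get_session(&ssl_context_control, &saved_session);
if (ret == 0)
    // must happen before the handshake on the data connection
    ret = mbedtls_ssl_set_session(&ssl_context_data, &saved_session);

// set_session copies the session into the context, so the local copy can go
mbedtls_ssl_session_free(&saved_session);
if (ret != 0) goto error;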
For cleanup, the rule is that each mbedtls_xxx_init() function has a corresponding mbedtls_xxx_free() function. Note that the name free is misleading: these functions clean up the object and free embedded resources (e.g. malloc'ed sub-structures) but they do not free the argument in the sense of heap free(). If you malloc or calloc an mbedtls_xxx structure, you need to call mbedtls_xxx_free() and then call free(). If the mbedtls_xxx structure is on the stack or global, just call mbedtls_xxx_free().
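For example, a minimal sketch of that rule with a session object, first on the stack and then heap-allocated:
mbedtls_ssl_session sess;                    // stack (or static) object
mbedtls_ssl_session_init(&sess);
// ... use sess ...
mbedtls_ssl_session_free(&sess);             // cleans up internals, does not free &sess

mbedtls_ssl_session *psess = calloc(1, sizeof(*psess));   // heap object
if (psess != NULL) {
    mbedtls_ssl_session_init(psess);
    // ... use *psess ...
    mbedtls_ssl_session_free(psess);         // release embedded resources first
    free(psess);                             // then release the structure itself
}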
Regarding documentation, the online documentation of Mbed TLS is frozen at an old version, and even when that version was current, the online documentation was for one specific compile-time configuration with all features enabled. Generate the documentation for your version and configuration by running make apidoc in the mbedtls source tree. You need Doxygen (a pretty standard documentation tool, so it may already be available in your development environment).
TLS is performed using mbedTLS 2.9.0
Beware that this is a very old version which has many unfixed security issues. You should upgrade to 2.28.1 (and keep updating through the 2.28.x long-term support series).

Related

Can an end point be connected to more than one router in a NoC topology in gem5 Garnet 3.0?

I am running gem5 version 22.0.0.2. I operate Garnet in standalone mode in conjunction with the Garnet synthetic traffic injector. I want to emulate a routerless NoC, so I guess I need to connect an end point (e.g., cores, caches, directories) to more than one "local" router. I just use a Python configuration to configure the topology. But when I do this, there is a runtime error:
build/NULL/mem/ruby/network/garnet/GarnetNetwork.cc:125: info: Garnet version 3.0
build/NULL/base/stats/group.cc:121: panic: panic condition statGroups.find(name) != statGroups.end() occurred: Stats of the same group share the same name `power_state`.
Memory Usage: 692360 KBytes
Program aborted at tick 0
Here is a description from the gem5 documentation: "Each network interface is connected to one or more “local” routers which is could be connected through an “External” link." Here is the link:https://www.gem5.org/documentation/general_docs/ruby/heterogarnet/
Here is the constructor of Stats::Group
Group(Group *parent, const char *name = nullptr)
Here is a description from the gem5 documentation: "there are special cases where the parent group may be null. One such special case is SimObjects where the Python code performs late binding of the group parent."
Here is the link: https://www.gem5.org/documentation/general_docs/statistics/api.
I guess the error may be related to this, but I don't know the exact reason.
Any help would be appreciated.
Thank you.

Set minimum idle connections in SQL pool

Background: I want to reduce the response time when working with SQL databases in my Go applications.
Go's standard library provides the database/sql package with a connection pool.
It includes the following configuration options:
func (db *DB) SetConnMaxIdleTime(d time.Duration)
func (db *DB) SetConnMaxLifetime(d time.Duration)
func (db *DB) SetMaxIdleConns(n int)
func (db *DB) SetMaxOpenConns(n int)
However, there is nothing like SetMinIdleConns to ensure that there are always some open connections that can be used when a request comes in.
As a result, my application has good response times under load, but the first request after some idle time always has some delay because it needs to open a new connection to the database.
Question: Is there a way to fix this with the Go standard library or is there some alternative connection pooling library for Go with this feature?
Workarounds already tried:
I tried setting ConnMaxIdleTime and ConnMaxLifetime to very high values, but then the SQL server closes the idle connections anyway, and I get even higher delays or even errors on the first call after a long idle time.
Obviously, I could create a background task that periodically uses the database. However, this does not seem like a clean solution.
I am considering porting a connection pool library from another language to Go.
The pgx/pgxpool library provides a connection pool for Postgres, which allows configuring the minimum number of connections:
connConf, err := pgxpool.ParseConfig(connStr)
// ... (error handling)
connConf.MaxConns = 50
// Set minimum number of connections:
connConf.MinConns = 10
pool, err := pgxpool.ConnectConfig(context.TODO(), connConf)

Openthread CLI UDP communication from main.c (NRF52840)

We are working with NRF52840 dongles and want to be able to have them relay data over an OpenThread mesh network through UDP automatically. We have found within the OpenThread API a solid udp.h header with all the UDP functions we need to create code that runs on the dongles from main.c.
Below is our code that should broadcast the message: "Hallo" to all nodes that have an open socket on port 1994.
We have read that the IPv6 address ff03::1 is reserved for multicast UDP broadcasting, and it works perfectly when performed manually with the CLI udp commands.
CLI: udp open, udp send ff03::1 1994 Hallo
All the nodes that have run udp open and udp bind :: 1994 receive the Hallo message from the sending node.
We are trying to recreate this in the main.c of our nodes so that we can provide the nodes with some intelligence of their own.
This piece of code is run once when the push button on the dongle is pressed.
The code compiles perfectly, and we have tested the functions that return a value with the RGB LED (green OK, red not OK) to confirm that no errors were produced (sadly, not all functions return an error value).
void udpSend(){
    const char *buf = "Hallo";
    otMessageInfo messageInfo;
    otInstance *myInstance;
    myInstance = thread_ot_instance_get();
    otUdpSocket mySocket;

    memset(&messageInfo, 0, sizeof(messageInfo));
    // messageInfo.mPeerAddr = otIp6GetUnicastAddresses(myInstance)->mNext->mNext->mAddress;
    otIp6AddressFromString("ff03::1", &messageInfo.mPeerAddr);
    messageInfo.mPeerPort = 1994;
    messageInfo.mInterfaceId = OT_NETIF_INTERFACE_ID_THREAD;

    otUdpOpen(myInstance, &mySocket, NULL, NULL);
    otMessage *test_Message = otUdpNewMessage(myInstance, NULL);
    otMessageSetLength(test_Message, sizeof(buf));

    if (otMessageAppend(test_Message, &buf, sizeof(buf)) == OT_ERROR_NONE){
        nrf_gpio_pin_write(LED2_G, 0);
    }
    else{
        nrf_gpio_pin_write(LED2_R, 0);
    }

    otUdpSend(&mySocket, test_Message, &messageInfo);
    otCliUartOutputFormat("Done.\0");
    otUdpClose(&mySocket);
}
Now, we aren't exactly experts, so we are not sure why this isn't working; we had a lot of trouble figuring out how everything is called/initialised.
We hope to create a way to send and receive data over UDP from the code, so that the nodes can operate autonomously.
We would really appreciate it if someone could assist us with our project!
Thanks!
Jonathan
There are a few errors in your code:
Remove the call to otMessageSetLength(). The message length is automatically increased as part of otMessageAppend().
The call to otMessageAppend() should be: otMessageAppend(test_Message, buf, (uint16_t)strlen(buf)).
Removed the & before buf.
Replaced sizeof() with strlen().
A couple of other things you should consider:
After calling otUdpNewMessage(), if any following call returns an error, make sure to call otMessageFree() on the message buffer.
Custody is only given to OpenThread after a successful call to otUdpSend().
Do not call udpSend() from interrupt context.
The OpenThread library was designed to assume a single thread of execution.
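Putting those fixes together, the send routine from the question might look roughly like this (a sketch only, keeping the names and API calls from the question and adding the error handling described above):
void udpSend(void)
{
    const char   *buf = "Hallo";
    otInstance   *myInstance = thread_ot_instance_get();
    otUdpSocket   mySocket;
    otMessageInfo messageInfo;

    memset(&messageInfo, 0, sizeof(messageInfo));
    otIp6AddressFromString("ff03::1", &messageInfo.mPeerAddr);
    messageInfo.mPeerPort    = 1994;
    messageInfo.mInterfaceId = OT_NETIF_INTERFACE_ID_THREAD;

    otUdpOpen(myInstance, &mySocket, NULL, NULL);

    otMessage *message = otUdpNewMessage(myInstance, NULL);
    if (message != NULL)
    {
        // No otMessageSetLength(): the length grows with otMessageAppend().
        // Pass buf (not &buf) and use strlen() (not sizeof()).
        if (otMessageAppend(message, buf, (uint16_t)strlen(buf)) != OT_ERROR_NONE ||
            otUdpSend(&mySocket, message, &messageInfo) != OT_ERROR_NONE)
        {
            // Custody only passes to OpenThread after a successful otUdpSend(),
            // so free the message ourselves on any failure.
            otMessageFree(message);
        }
    }

    otUdpClose(&mySocket);
}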
Hope that helps.

Boost ASIO and UDP Errors

I've got two test programs (A & B) that are nearly identical and use the same Boost.Asio UDP async code.
Here is the receive call:
_mSocket.async_receive_from(
boost::asio::buffer(_mRecvBuffer), _mReceiveEndpoint,
boost::bind(&UdpConnection::handle_receive, this,_mReceiveEndpoint,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
// _mReceiveEndpoint is known and good. the buffer is good too.
// here's the handler
void handle_receive(const udp::endpoint recvFromEP,
                    const boost::system::error_code& error,
                    std::size_t bytesRecv /*bytes_transferred*/)
{
    boost::shared_ptr<std::string> message(
        new std::string(_mRecvBuffer.c_array(), bytesRecv));
    if (!error)
    {
        doSomeThingGood();
    }
    else
    {
        cerr << "UDP Recv error : " << error << endl;
    }
}
So here's what happens, all on localhost.
If I start program 'A' first, then program 'B', 'A' gives a UDP Recv error : server:10061.
Program 'A' continues to send just fine and 'B' receives just fine.
You can swap 'A' and 'B' in the above sentence and it is still true.
If I attempt to reset the bad read condition by calling _mSocket.async_receive_from again, I get error 10054.
I've looked these errors up on the web..... not very helpful.
Anybody have any ideas as to what these mean, and how I can recover inside the program if this condition occurs? Is there a way to reset the socket?
Sanity check.... can both programs operate on loopback with only two ports?
A send = 20000, A receive = 20001
B send = 20001, B receive = 20000
TL;DR
It appears as though if I try to listen before I'm sending, I get an error & I can't recover from it. If I listen after sending, I'm fine.
-- EDIT - It appears that McAfee host intrusion prevention is doing something nasty to me.... If I debug in VS2010, I get stuck in their DLL.
Thanks
In my receive handler, I wasn't calling _mSocket.async_receive_from() again.... I just printed the error and exited.
Silly mistake, just posting here in case it helps anyone else.
Also for a similar problem with a different resolution:
_mSocket.set_option(boost::asio::socket_base::reuse_address(true));
helps if you have multiple listeners.
Several sources explain that you should use SO_REUSEADDR on Windows, but none mention that it is possible to receive UDP messages both with and without binding the socket.
The code below binds the socket to a local listen_endpoint; that is essential, because without it you can and will still receive your UDP messages, but by default you will have exclusive ownership of the port.
However, if you set reuse_address(true) on the socket (or on the acceptor when using TCP) and bind the socket afterwards, multiple applications, or multiple instances of your own application, can do the same, and everyone will receive all messages.
// Create the socket so that multiple may be bound to the same address.
boost::asio::ip::udp::endpoint listen_endpoint(
    listen_address, multicast_port);

// == important part ==
socket_.open(listen_endpoint.protocol());
socket_.set_option(boost::asio::ip::udp::socket::reuse_address(true));
socket_.bind(listen_endpoint);
// == important part ==

boost::array<char, 2000> recvBuffer;
socket_.async_receive_from(boost::asio::buffer(recvBuffer), m_remote_endpoint,
    boost::bind(&SocketReader::ReceiveUDPMessage, this,
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred));
source:
http://www.boost.org/doc/libs/1_45_0/doc/html/boost_asio/example/multicast/receiver.cpp

Detecting open PC COM port from USB Virtual Com Port device

I am using an STM32F105 microcontroller with the STM32_USB-FS-Device_Lib_V3.2.1 USB library and have adapted the VCP example for our purposes (integration with RTOS and serial API).
The problem is that if the USB cable is attached, but the port is not open on the Windows host, after a few minutes the device ends up permanently re-entering the USB ISR until the port is opened and then it all starts working normally.
I have instrumented the interrupt handler and can see that when the fault occurs, the ISR handler exits and then immediately re-enters. This occurs because on exit from the interrupt the IEPINT flag in OTG_FS_GINTSTS is not clear. The OTG_FS_DAINT at this time contains 0x00000002 (IEPINT1 set), while DIEPINT1 has 0x00000080 (TXFE). The line in OTGD_FS_Handle_InEP_ISR() that clears TXFE is called, but the bit either does not clear or is immediately reasserted. When the COM port on the host is reopened, the state of OTG_FS_GINTSTS and OTG_FS_DAINT at the end of the interrupt is always zero, and further interrupts occur at the normal rate. Note that the problem only occurs if data is being output but the host has no port open. If either the port is open or no data is output, the system runs indefinitely. I believe that the more data that is output, the sooner the problem occurs, but that is anecdotal at present.
The VCP code has a state variable that takes the following enumerated values:
UNCONNECTED,
ATTACHED,
POWERED,
SUSPENDED,
ADDRESSED,
CONFIGURED
and we use the CONFIGURED state to determine whether to put data into the driver buffer for sending. However, the CONFIGURED state is set when the cable is attached, not when the host has the port open and an application connected. I see that when Windows does open the port there is a burst of interrupts, so it seems that some communication occurs on this event; I therefore wonder whether it is possible to detect whether the host has the port open.
I need one of two things perhaps:
To prevent the USB code from getting stuck in the ISR in the first instance
To determine whether the host has the port open from the device end, and only push data for sending when open.
Part (1) - preventing the interrupt lock-up - was facilitated by a USB library bug fix from ST support; it was not correctly clearing the TxEmpty interrupt.
After some research and assistance from ST Support, I have determined a solution to part (2) - detecting whether the host port is open. Conventionally, when a port is opened the DTR modem control line is asserted. This information is passed to a CDC class device, so I can use this to achieve my aim. It is possible for an application to change the behaviour of DTR, but this should not happen in any of the client applications that are likely to connect to this device in this case. However there is a back-up plan that implicitly assumes the port to be open if the line-coding (baud, framing) are set. In this case there is no means of detecting closure but at least it will not prevent an unconventional application from working with my device, even if it then causes it to crash when it disconnects.
Regarding ST's VCP example code specifically I have made the following changes to usb_prop.c:
1) Added the following function:
#include <stdbool.h>

static bool host_port_open = false ;

bool Virtual_Com_Port_IsHostPortOpen()
{
    return bDeviceState == CONFIGURED && host_port_open ;
}
2) Modified Virtual_Com_Port_NoData_Setup() handling of SET_CONTROL_LINE_STATE thus:
else if (RequestNo == SET_CONTROL_LINE_STATE)
{
    // Test DTR state to determine if host port is open
    host_port_open = (pInformation->USBwValues.bw.bb0 & 0x01) != 0 ;
    return USB_SUCCESS;
}
3) To allow use with applications that do not operate DTR conventionally I have also modified Virtual_Com_Port_Data_Setup() handling of SET_LINE_CODING thus:
else if (RequestNo == SET_LINE_CODING)
{
    if (Type_Recipient == (CLASS_REQUEST | INTERFACE_RECIPIENT))
    {
        CopyRoutine = Virtual_Com_Port_SetLineCoding;

        // If line coding is set the port is implicitly open
        // regardless of host's DTR control. Note: if this is
        // the only indicator of port open, there will be no indication
        // of closure, but this will at least allow applications that
        // do not assert DTR to connect.
        host_port_open = true ;
    }
    Request = SET_LINE_CODING;
}
I found another solution by adapting CDC_Transmit_FS.
It can now be used as the output for printf by overriding the _write function.
First it checks the connection state, then it tries to send over the USB endpoint in a busy loop, which repeats sending while the USB is busy.
I found out that if dev_state is not USBD_STATE_CONFIGURED, the USB plug is disconnected. If the plug is connected but no VCP port is open via PuTTY or Termite, the second check fails.
This implementation works fine for me in an RTOS and CubeMX HAL application. The busy loop no longer blocks low-priority threads.
uint8_t CDC_Transmit_FS(uint8_t* Buf, uint16_t Len)
{
    uint8_t result = USBD_OK;

    // Check if USB interface is online and VCP connection is open
    // prior to send:
    if ((hUsbDevice_0->dev_state != USBD_STATE_CONFIGURED)
        || (hUsbDevice_0->ep0_state == USBD_EP0_STATUS_IN))
    {
        // The physical connection fails.
        // Or: the physical connection is open, but no VCP link is up.
        result = USBD_FAIL;
    }
    else
    {
        USBD_CDC_SetTxBuffer(hUsbDevice_0, Buf, Len);

        // Busy-wait while USB is busy; exit on success or disconnection.
        while (1)
        {
            // Check if USB went offline while retrying
            if ((hUsbDevice_0->dev_state != USBD_STATE_CONFIGURED)
                || (hUsbDevice_0->ep0_state == USBD_EP0_STATUS_IN))
            {
                result = USBD_FAIL;
                break;
            }

            // Try to send
            result = USBD_CDC_TransmitPacket(hUsbDevice_0);
            if (result == USBD_OK)
            {
                break;
            }
            else if (result == USBD_BUSY)
            {
                // Retry until the USB device is free.
            }
            else
            {
                // Any other failure
                result = USBD_FAIL;
                break;
            }
        }
    }
    return result;
}
CDC_Transmit_FS is used by _write:
// This function is used by printf and puts.
int _write(int file, char *ptr, int len)
{
    (void) file; // Ignore file descriptor

    uint8_t result;
    result = CDC_Transmit_FS((uint8_t*)ptr, len);
    if (result == USBD_OK)
    {
        return (int)len;
    }
    else
    {
        return EOF;
    }
}
Regards
Bernhard
After much searching and a kind of reverse engineering, I finally found a method for detecting the opening of a terminal and also its termination. I found that in the CDC class there are three nodes: one is a control node and the other two are data-in and data-out nodes. Now, when you open a terminal a code is sent to the control node, and likewise when you close it. All we need to do is get those codes and use them to start and stop our data transmission tasks. The codes sent are 0x21 and 0x22, respectively, for opening and closing the terminal. In usbd_cdc_if.c there is a function that receives and interprets those codes (there is a switch case, and the variable cmd is the code we are talking about); that function is CDC_Control_FS. Now all we need to do is expand that function so that it interprets 0x21 and 0x22. There you are: now you know in your application whether the port is open or not.
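A rough sketch of that idea, based on the CubeMX-generated usbd_cdc_if.c (the flag name is ours, and the claim that 0x21 marks opening and 0x22 closing is this answer's observation, so verify it against your own traces):
static volatile uint8_t vcp_port_open = 0;   /* our flag, not part of the generated code */

static int8_t CDC_Control_FS(uint8_t cmd, uint8_t* pbuf, uint16_t length)
{
    switch (cmd)
    {
        case CDC_GET_LINE_CODING:              /* 0x21: seen when a terminal opens the port */
            vcp_port_open = 1;
            /* keep any generated handling for this request here */
            break;

        case CDC_SET_CONTROL_LINE_STATE:       /* 0x22: seen when the terminal closes it */
            vcp_port_open = 0;
            /* keep any generated handling for this request here */
            break;

        /* ... all other cases generated by CubeMX stay unchanged ... */

        default:
            break;
    }
    (void) pbuf;
    (void) length;
    return (USBD_OK);
}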
I need one of two things perhaps:
To prevent the USB code from getting stuck in the ISR in the first instance
To determine whether the host has the port open from the device end, and only push data for sending when open.
You should attempt to do option 1 instead of 2. On Windows and Linux, it is possible to open a COM port and use it without setting the control signals, which means there is no fool-proof, cross-platform way to detect that the COM port is open.
A well programmed device will not let itself stop functioning just because the USB host stopped polling for data; this is a normal thing that should be handled properly. For example, you might change your code so that you only queue up data to be sent to the USB host if there is buffer space available for the endpoint. If there is no free buffer space, you might have some special error handling code.
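As an illustration of that approach (names and sizes are ours, not from the ST library): keep a small TX ring buffer, never block the application, and count drops when the host is not draining the endpoint:
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch only; the names and sizes are ours, not part of the
 * ST library.  Application code writes into a small TX ring buffer and
 * never blocks; when the host is not draining the IN endpoint the buffer
 * fills up and further data is dropped and counted. */
#define VCP_TX_RING_SIZE  512u

static uint8_t           vcp_tx_ring[VCP_TX_RING_SIZE];
static volatile uint16_t vcp_tx_head;     /* written by the application   */
static volatile uint16_t vcp_tx_tail;     /* advanced by the TX-done ISR  */
static volatile uint32_t vcp_overrun_count;

size_t vcp_write_nonblocking(const uint8_t *data, size_t len)
{
    size_t written = 0;

    while (written < len)
    {
        uint16_t next = (uint16_t)((vcp_tx_head + 1u) % VCP_TX_RING_SIZE);
        if (next == vcp_tx_tail)                  /* ring full: host not reading */
        {
            vcp_overrun_count += (uint32_t)(len - written);
            break;                                /* special error handling goes here */
        }
        vcp_tx_ring[vcp_tx_head] = data[written++];
        vcp_tx_head = next;
    }
    /* The USB transmit-complete interrupt drains tail -> IN endpoint. */
    return written;
}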
I have the same requirement to detect PC port open/close. I have seen it implemented as follows:
Open detected by:
DTR asserted
CDC bulk transfer
Close detected by:
DTR deasserted
USB "unplugged", sleep etc
This seems to be working reasonably well, although more thorough testing will be needed to confirm it works robustly.
Disclaimer: I use code generated by Cube, and as a result it works with HAL drivers. The solutions proposed here before don't work for me, so I have found another one. It is not pretty, but it works for some purposes.
One indirect sign of an unopened port arises when you try to transmit a packet with CDC_Transmit_FS and then wait until TxState is set to 0: if the port is not open, that never happens. So my solution is to set a timeout:
uint16_t count = 0;
USBD_CDC_HandleTypeDef *hcdc =
        (USBD_CDC_HandleTypeDef*) USBD_Device.pClassData;

while (hcdc->TxState != 0) {
    if (++count > BUSY_TIMEOUT) { // number of cycles to wait until it makes a decision
        // here it is clear that the port is not open
    }
}
The problem is also that if one tries to open the port after the device has tried to send a packet, it can't be done. Therefore, the whole routine I use is:
uint8_t waitForTransferCompletion(void) {
    uint16_t count = 0;
    USBD_CDC_HandleTypeDef *hcdc =
            (USBD_CDC_HandleTypeDef*) USBD_Device.pClassData;

    while (hcdc->TxState != 0) {
        if (++count > BUSY_TIMEOUT) { // number of cycles to wait until it makes a decision
            USBD_Stop(&USBD_Device);  // stop and
            MX_USB_DEVICE_Init();     // init device again
            HAL_Delay(RESET_DELAY);   // give a chance to open the port
            return USBD_FAIL;         // return fail, to send the last packet again
        }
    }
    return USBD_OK;
}
The question is how big the timeout has to be so as not to interrupt transmission while the port is open. I set BUSY_TIMEOUT to 3000, and now it works.
I fixed it by checking the variable hUsbDeviceFS.ep0_state.
It equals 5 if connected and 4 if not connected or disconnected.
But there is an issue in the HAL: it equals 5 when the program starts.
The following steps fixed it at the beginning of the program:
/* USER CODE BEGIN 2 */
HAL_Delay(500);
hUsbDeviceFS.ep0_state = 4;
...
I do not have any wish to learn the HAL internals; I hope this post will be seen by the developers and they will fix the HAL.
It helped me to fix my issue.