Can anyone suggest a practical use of OUT Interrupt endpoint in USB 2.0?
Why does the host need to interrupt the device when it is the bus master?
Regards
The benefit of having an interrupt OUT endpoint is that you get guaranteed bandwidth and a bounded service interval, no matter what else is happening on the bus. (The host still initiates every transfer; "interrupt" here refers to the scheduling guarantee, not to the device being interrupted.) So you could use it to transfer data that is time sensitive, such as movement commands to a motor or something like that.
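For what it's worth, the host reserves that bandwidth based on the endpoint descriptor. A minimal sketch (endpoint number, packet size, and interval are arbitrary choices, not requirements):

```c
#include <stdint.h>

/* Interrupt OUT endpoint descriptor per USB 2.0 (Table 9-13). */
static const uint8_t ep_int_out_desc[7] = {
    7,          /* bLength */
    5,          /* bDescriptorType: ENDPOINT */
    0x01,       /* bEndpointAddress: EP1, bit 7 clear = OUT */
    0x03,       /* bmAttributes: interrupt */
    64, 0,      /* wMaxPacketSize: 64 bytes, little-endian */
    1,          /* bInterval: poll every frame (1 ms at full speed) */
};
```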
I am working on a BLE application on an embedded platform where there are frequent connect/disconnect events. The issue I am seeing is that re-connection takes too long. The high frequency of connects/disconnects is part of the usage scenario, so I can't change that. What I can do is make the re-connection more efficient. I noticed that the bulk of the re-connection time is spent on service/characteristic discovery of the other device.
I still want to make sure the services/characteristics of the connecting device haven't changed. Instead of discovering all the services, can we use a characteristic that holds a hash of all the services/characteristics on the device? Each device could then compare the received hash with the stored one, and only perform full service discovery on a mismatch. Is there a precedent for doing this in BLE?
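To make the idea concrete, here is a rough sketch of the logic I have in mind (read_hash_char() is a hypothetical GATT read of such a hash characteristic, and the 16-byte length is just an example):

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define DB_HASH_LEN 16                      /* example hash length */

static uint8_t stored_hash[DB_HASH_LEN];    /* cached from the last connection */

extern void read_hash_char(uint8_t out[DB_HASH_LEN]);  /* hypothetical GATT read */

/* On reconnect: skip full discovery unless the peer's hash changed. */
static bool needs_full_discovery(void)
{
    uint8_t current[DB_HASH_LEN];
    read_hash_char(current);
    if (memcmp(current, stored_hash, DB_HASH_LEN) == 0)
        return false;                       /* GATT table unchanged: reuse cache */
    memcpy(stored_hash, current, DB_HASH_LEN);
    return true;                            /* rediscover services/characteristics */
}
```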
Bluetooth Low Energy (BLE) allows devices to leave their transmitters off most of the time to achieve its “Low Energy”.
I would expect a central device to subscribe to notifications from the peripheral. That way the peripheral only turns on and transmits when there are updates.
The other approach would be to put the data (or hash) in the advertising data (manufacturer data or service data), as many sensor beacons do. That way you might not need to connect at all, or only connect when needed.
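As a rough sketch of what that looks like on the wire (the 16-bit UUID 0xFFF0 and the 4-byte hash are placeholder values, not assigned numbers):

```c
#include <stdint.h>

/* Advertising payload: a Flags AD structure followed by a Service Data
 * structure (AD type 0x16) carrying a 4-byte hash of the GATT table. */
static const uint8_t adv_data[] = {
    0x02, 0x01, 0x06,        /* Flags: LE General Discoverable, no BR/EDR */
    0x07, 0x16, 0xF0, 0xFF,  /* len 7, Service Data, 16-bit UUID 0xFFF0 */
    0xDE, 0xAD, 0xBE, 0xEF,  /* placeholder hash bytes */
};
```

A central can read this from scan results and decide whether a connection (and rediscovery) is needed at all.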
I'm developing a USB device driver for a microcontroller (Atmel/Microchip SAMD21, but I think the question is a general one). I need multiple endpoints for control & data, and the USB hardware uses per-endpoint descriptors to (among other things) locate buffers for input and output data.
Since IN data is polled at the host's discretion it makes sense that each endpoint has its own IN buffer, so that any endpoint's data (if it has any to send) is immediately available when polled.
But as far as incoming data from SETUP & OUT transactions is concerned, it occurs to me that I can save memory by configuring all endpoints to use a shared buffer. It seems wasteful for each endpoint to have its own buffer when, given the nature of USB transactions, only one such transaction can occur at a time.
Obviously this approach requires that transaction interrupts are handled sufficiently quickly that the shared buffer is freed and prepared for a new transaction in time for whatever the next transaction might be - but this is already a requirement for the control endpoint, where some SETUP transactions are immediately followed by an OUT.
So, assuming the timing is feasible, is there any other reason why such an approach wouldn't work?
Probably not.
Normally, the USB module on a microcontroller handles OUT packets by keeping track of which packet buffers it has written data to, and it waits for your firmware to say it is done processing the buffer before accepting more data from the computer and overwriting the buffer. If an endpoint has no buffers available to receive more data, but the computer sends an OUT packet to the endpoint, the USB module typically responds to the computer with a NAK packet, which tells the computer it should retry later. In this situation, your firmware can take pretty much as long as it wants to handle the OUT packets.
By having multiple endpoints configured to use the same buffer, you mess up this system. When you receive an OUT packet on any of your endpoints, the USB module would (probably) not know that multiple endpoints use the same buffer, so it would not issue NAK packets on your other OUT endpoints. If it receives another OUT packet right away, it would write it to the same buffer, overwriting the previous packet. Therefore, whenever you receive a packet, your code would have to rush as fast as it can to do something like copying the data out of that buffer, disabling other OUT endpoints, or reassigning buffers.
Even if you can actually get this to work, it means that your scheme to save a little bit of memory turns the servicing of USB events into a real-time task (i.e. a task that requires responses from your code in a few microseconds). If you want to add another real-time task to your system later, it will be very difficult, because you always have to be ready to be interrupted by your USB-handling code.
The SAMD21 has plenty of memory (up to 32 KB of RAM, depending on the variant), so you probably don't need to worry about optimizing this part of it.
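As a hedged sketch of how that per-endpoint ownership handshake looks on the SAMD21 (register and descriptor names follow the Atmel/ASF headers; endpoint type configuration and DESCADD setup are omitted, so verify against your device pack):

```c
#include "samd21.h"   /* device pack header */

#define EP_OUT      1
#define EP_BUF_SIZE 64

/* One RAM descriptor per endpoint, pointed to by USB->DEVICE.DESCADD. */
static UsbDeviceDescriptor ep_table[USB_EPT_NUM] __attribute__((aligned(4)));
static uint8_t ep_out_buf[EP_BUF_SIZE];

static void ep_out_init(void)
{
    ep_table[EP_OUT].DeviceDescBank[0].ADDR.reg = (uint32_t)ep_out_buf;
    ep_table[EP_OUT].DeviceDescBank[0].PCKSIZE.bit.SIZE = 3;  /* 64-byte packets */
    /* Clearing BK0RDY hands bank 0 to the hardware: "ready to receive". */
    USB->DEVICE.DeviceEndpoint[EP_OUT].EPSTATUSCLR.reg =
        USB_DEVICE_EPSTATUSCLR_BK0RDY;
}

/* Endpoint interrupt: while BK0RDY stays set, the hardware NAKs further
 * OUT packets on this endpoint, so this handler is not hard real-time. */
static void ep_out_handler(void)
{
    uint16_t len =
        ep_table[EP_OUT].DeviceDescBank[0].PCKSIZE.bit.BYTE_COUNT;
    /* ... consume len bytes from ep_out_buf ... */
    (void)len;
    USB->DEVICE.DeviceEndpoint[EP_OUT].EPSTATUSCLR.reg =
        USB_DEVICE_EPSTATUSCLR_BK0RDY;   /* re-arm for the next packet */
}
```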
I agree with David's response. You didn't mention the speed of the device you are creating. A low-speed device needs just a few 8-byte buffers; a full-speed device, a few 64-byte buffers; a high-speed device, maybe eight 512-byte buffers, depending on your use; and even for a SuperSpeed device, you're still only talking about a few 1024-byte buffers.
I would create a ring buffer for each endpoint. This way you are not moving data around. You are simply using a pointer that points to an entry within a memory ring. A full-speed device with a control endpoint, an interrupt endpoint, and two bulk endpoints, each endpoint having sixteen 64-byte entries per ring, is still only a total of 4k RAM, 1/8th of the total RAM.
However, I am not familiar with the SAMD21, so please check the specification to be sure this will work.
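A minimal, platform-neutral sketch of that ring-per-endpoint idea (sizes are illustrative, and the ISR/re-arm details depend on your USB module):

```c
#include <stdint.h>
#include <stddef.h>

#define EP_PKT_SIZE 64   /* full-speed max packet size */
#define RING_DEPTH  16   /* 16 x 64 B = 1 KiB per endpoint */

typedef struct {
    uint8_t  pkt[RING_DEPTH][EP_PKT_SIZE];
    uint16_t len[RING_DEPTH];   /* bytes received into each entry */
    volatile uint8_t head;      /* entry the USB ISR fills next */
    volatile uint8_t tail;      /* entry the application reads next */
} ep_ring_t;

static ep_ring_t rings[4];      /* one ring per endpoint: no shared buffer */

/* USB ISR: record the finished packet and advance to the next entry.
 * No data is copied; only indices move. */
static void ring_commit(ep_ring_t *r, uint16_t rx_len)
{
    r->len[r->head] = rx_len;
    r->head = (uint8_t)((r->head + 1) % RING_DEPTH);
    /* If head catches up with tail, leave the endpoint NAKing until the
       application frees an entry by advancing tail. */
}

/* Application: borrow the oldest packet in place, then release it. */
static const uint8_t *ring_peek(ep_ring_t *r, uint16_t *out_len)
{
    if (r->tail == r->head)
        return NULL;            /* ring empty */
    *out_len = r->len[r->tail];
    return r->pkt[r->tail];
}

static void ring_release(ep_ring_t *r)
{
    r->tail = (uint8_t)((r->tail + 1) % RING_DEPTH);
}
```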
I'm developing an embedded app, written in C, using an M16C/28 uC from Renesas.
The app manages two simple tasks:
RFID detection and reading of MIFARE tags (HW: Mf500 from NXP). The uC handles the whole FW implementation.
Dealing with an RS485 frame protocol as a slave. (The app has to be able to process RS485 frames every 10 ms.)
The RFID implementation contains blocking code, and the response time to detect an RFID tag is about 15 ms. This causes RX buffer overflows in the RS485 processing.
My questions are as follows:
Is it normal to deal with such response times in the RFID world?
Should I use an RTOS to preempt the RFID task and meet the RS485 frame requirements?
Should I use an external uC acting as a host controller to take load off the RFID-managing uC?
Thanks in advance
To answer your questions:
It depends.
You could use an RTOS.
You could use an additional uC.
Better options, sketched below, would be to:
Use DMA on serial communications.
Make the RFID code non-blocking.
Do more in your serial interrupt.
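A hedged illustration of interrupt-driven reception plus a non-blocking RFID loop (uart_read_byte(), rs485_feed_byte(), and rfid_poll_step() are hypothetical placeholders for your platform):

```c
#include <stdint.h>

#define RX_RING_SIZE 256     /* power of two so indices wrap cheaply */

static volatile uint8_t  rx_ring[RX_RING_SIZE];
static volatile uint16_t rx_head, rx_tail;

/* Hypothetical platform hooks */
extern uint8_t uart_read_byte(void);      /* read the UART RX register */
extern void    rs485_feed_byte(uint8_t);  /* incremental frame parser */
extern void    rfid_poll_step(void);      /* one bounded step of the RFID work */

/* UART RX interrupt: just store the byte; framing happens in the loop. */
void uart_rx_isr(void)
{
    rx_ring[rx_head & (RX_RING_SIZE - 1)] = uart_read_byte();
    rx_head++;
}

int main(void)
{
    for (;;) {
        /* Drain pending RS485 bytes first: this path must meet the 10 ms budget. */
        while (rx_tail != rx_head) {
            rs485_feed_byte(rx_ring[rx_tail & (RX_RING_SIZE - 1)]);
            rx_tail++;
        }
        /* One short, bounded step of the RFID state machine per pass,
         * instead of a 15 ms blocking detect/read call. */
        rfid_poll_step();
    }
}
```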
The response time varies depending on the type of card/RFID tag you are communicating with. I don't know the timings of MIFARE tags, but 15 ms does not seem bad.
In your situation, you may have more requests coming from RS485 than you can handle on the RFID side. You can use queues or FIFOs to store the incoming requests so that you can process them later, according to the physical limitations of your system.
Using an RTOS can help, but they are usually not free. Plus, you may have to port it to your platform if it is not already supported. If all your firmware does is handle RS485 requests and communicate with the RFID reader, you should sort this out with interrupts to store the incoming commands and a loop to process them separately.
And the second uC is like the RTOS: it can help, but it might not be the right solution in this scenario (you will have to manage two firmware images and a communication protocol or FIFO between the uCs, and it will cost twice the price, ...).
I'm writing code to learn about the USB peripheral on a Freescale Kinetis microcontroller. I've managed to get through enumeration on a Linux host, and I can send & receive packets using vendor-custom codes on EP0, interacting with a libusb test program.
It looks like I can configure additional control endpoints (non-zero endpoint numbers) on the microcontroller, but I don't see a way to make libusb send / receive control transfers to those endpoints. (libusb_control_transfer doesn't require an endpoint number, though libusb_bulk_transfer and libusb_interrupt_transfer do.)
Are non-zero control endpoints so uncommon or unnecessary that it's not worth bothering with them? Is there some way to get libusb to execute control transactions to non-zero endpoints?
Is there some way to get libusb to execute control transactions to non-zero endpoints?
You can try to modify the endpoint field in the libusb_transfer structure of the asynchronous I/O API.
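A hedged sketch of that approach (bRequest 0x01 and the 8-byte payload are made-up vendor values, and you still need a libusb_handle_events() loop plus a completion callback to reap the transfer):

```c
#include <libusb-1.0/libusb.h>
#include <string.h>

/* Build an ordinary control transfer, then retarget it at EP1. Whether
 * the host stack and device accept this is exactly what you'd be testing. */
static int vendor_ctrl_on_ep1(libusb_device_handle *h)
{
    static unsigned char buf[LIBUSB_CONTROL_SETUP_SIZE + 8];
    struct libusb_transfer *xfr = libusb_alloc_transfer(0);
    if (!xfr)
        return LIBUSB_ERROR_NO_MEM;

    libusb_fill_control_setup(buf,
        LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_RECIPIENT_DEVICE,  /* host-to-device */
        0x01, 0, 0, 8);                 /* bRequest, wValue, wIndex, wLength */
    memset(buf + LIBUSB_CONTROL_SETUP_SIZE, 0xAA, 8);  /* example payload */

    libusb_fill_control_transfer(xfr, h, buf, NULL, NULL, 1000 /* ms */);
    xfr->endpoint = 0x01;               /* the experiment: EP1 instead of EP0 */
    return libusb_submit_transfer(xfr); /* reap via libusb_handle_events() */
}
```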
But it would surprise me if your microcontroller could actually support non-zero control endpoint(s) - not that many do.
In practice you would rather use interrupt or bulk endpoints. Both have less overhead, allowing higher throughput with bulk transfers (see for example USB 2.0 spec, Table 5-2 vs. Table 5-9).
I am creating a server on a ST Cortex M3 device. I am using the lwip API and FreeRTOS. All is working, but the response time is way off. I am currently using lwip 1.3.2 and FreeRTOS 7.3.
A single client connects to the server and must have some time-critical data sent frequently. These packets are on the order of 6 or so bytes. Other times, I am sending upwards of 20K.
The problem I am having is that these smaller packets seem to take forever to be sent. I assume this is because lwip is waiting for more data to be enqueued to make transmissions more efficient. I cannot wait around for 2 or 3 seconds for the data to be sent; the client expects the data nominally within a few microseconds or milliseconds.
I have tried using lwip_send and lwip_write. (I understand that one is the same as the other with a flag passed at the end. Just had to try...) I have tried setting TCP_NODELAY on the socket to no avail. I tried to set SO_SNDLOWAT to '1', but this always returned -1, so I do not think it is supported.
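For reference, this is the Nagle-disable attempt (assuming lwIP's sockets API; whether this particular lwIP 1.3.2 build honors it is part of my question):

```c
#include "lwip/sockets.h"

/* Try to disable Nagle on an lwIP socket; returns 0 on success. */
static int disable_nagle(int sock)
{
    int flag = 1;
    return lwip_setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                           &flag, sizeof(flag));
}
```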
I do not want to redo all of my code using TCP RAW. Is there a way to invoke the tcp_output() function outside of TCP RAW mode? Is there any way to speed things up or is this just how slow lwip TCP with small packets is?
Any and all suggestions are welcome. Thanks.
--EDIT--
I would also like to add that once I am ready to transmit, I make sure that my TX task in FreeRTOS is at the highest priority. There are no other tasks running up to the point at which I call lwip_send/write.
I'm fairly experienced with bare-metal lwIP on Xilinx, and lwIP does not wait to send things out. It will pump packets out as fast as your interrupts are acknowledged, based on the Ethernet hardware. I've been using UDP only. What comes to mind, though, is that your problem might be on the receive end. If you are doing TCP, maybe those small packets are coming out late because you are having receive issues.

What you need to do is find the lowest-level point in the code at which an Ethernet frame is transmitted and put a general-purpose output toggle there. Then also put a general-purpose output toggle where an Ethernet packet is received. Look at the signals on a scope. If they confirm your hypothesis, move the output toggles around to narrow down the issue. Wash, rinse, and repeat until you are down to where the issue is. It's crude and time-consuming, but this brute-force approach often solves "impossible" embedded software problems through pure determination. Good luck!
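A hedged sketch of the toggle instrumentation (gpio_toggle() and the probe pins are placeholders for your platform's fastest GPIO access, ideally a single register write so the probe itself costs almost nothing):

```c
/* Hypothetical platform hook and probe pin assignments. */
extern void gpio_toggle(int pin);
#define PIN_TX_PROBE 0
#define PIN_RX_PROBE 1

/* At the lowest-level transmit point (e.g. lwIP's netif->linkoutput path): */
static void mac_transmit(const void *frame, int len)
{
    gpio_toggle(PIN_TX_PROBE);      /* scope channel 1: frame handed to MAC */
    /* ... existing DMA/FIFO transmit code ... */
    (void)frame; (void)len;
}

/* In the Ethernet receive interrupt: */
void eth_rx_isr(void)
{
    gpio_toggle(PIN_RX_PROBE);      /* scope channel 2: frame arrived */
    /* ... existing receive handling ... */
}
```

Comparing the two channels against the moment your application queues data shows whether the delay is in lwIP, the driver, or the far end.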