I would like to implement USB communication at a speed of 30 Mbit/s. My hardware supports "high-speed USB", so the hardware platform will not limit me.
Can I achieve this speed using the USB CDC class or the mass storage class, or are these USB classes speed-limited?
In the USB protocol, who determines the bit rate? Is it the device?
The USB CDC and mass storage classes do not have any kind of artificial speed limiting, so you can probably get a throughput of 30 Mbit/s on a high-speed USB connection (which signals bits on the wire at 480 Mbit/s). The throughput you get will be determined by how much bus bandwidth is being used by other devices and how efficiently your device-side firmware, host-side driver, and host-side software operate.
The bit rate is mostly determined by the device. The device basically signals to the host what USB speeds it supports, and the host picks one. The full story is a little bit more complicated, and there are a lot more details about how that works in the USB specification.
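To make that concrete: the speed itself is settled electrically at reset time (the USB 2.0 high-speed "chirp" handshake), and the descriptors the device then reports simply reflect the negotiated speed. For a high-speed bulk endpoint, such as the CDC data endpoint, that means a 512-byte maximum packet size. Here is a minimal sketch of such a descriptor in C (the endpoint address is just an example, not tied to any particular device):

    #include <stdint.h>

    /* Illustrative descriptor for a high-speed bulk IN endpoint, e.g. the
     * data-in endpoint of a CDC ACM data interface. Field names follow the
     * USB 2.0 specification, section 9.6.6. */
    struct usb_endpoint_descriptor {
        uint8_t  bLength;          /* 7 bytes */
        uint8_t  bDescriptorType;  /* 0x05 = ENDPOINT */
        uint8_t  bEndpointAddress; /* 0x81 = IN endpoint 1 (example address) */
        uint8_t  bmAttributes;     /* 0x02 = bulk transfer type */
        uint16_t wMaxPacketSize;   /* 512 bytes at high speed */
        uint8_t  bInterval;        /* max NAK rate for high-speed bulk; 0 is common */
    } __attribute__((packed));

    static const struct usb_endpoint_descriptor bulk_in_ep = {
        .bLength          = 7,
        .bDescriptorType  = 0x05,
        .bEndpointAddress = 0x81,
        .bmAttributes     = 0x02,
        .wMaxPacketSize   = 512,
        .bInterval        = 0,
    };

In practice, sustained throughput in the tens of Mbit/s is usually more about keeping that bulk endpoint busy (queuing multiple transfers on both the device and host sides) than about anything the class specification limits.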
Related
I'm just getting into the massive topic of UEFI driver development, and from what I understand so far, hardware peripherals are controlled using specific addresses mapped into memory. Well, memory is hardware too. Is it not controlled by drivers?
I assume the CPU and motherboard have built-in circuits that handle this, but my curiosity is whether drivers have any hardware-level control over this handling. I'd just prefer to know for sure, and I'm not sure which manual would explain this.
[kernel/UEFI] driver <-> memory mapped address <-> firmware [hardware:keyboard]
[kernel/UEFI] driver <-> ? <-> firmware [hardware:RAM]
My guess (assuming there is some spec governing it):
driver <-> CPU microcode <-> motherboard circuit <-> firmware
I just think assumptions are bad, and I can't find a citation confirming the probable answer. The answer is relevant to security and to which supply chain / standard we're trusting. Just as PCIe and NVMe are standard specs, perhaps there's a standard for RAM <-> CPU communication?
Maybe this question is a better fit for an Engineering SE site?
From a software development perspective, there isn't a driver for RAM control; RAM is exposed as an indistinguishable part of the hardware via the Instruction Set Architecture (ISA). Just as the CPU is hardware but has no driver to control it, RAM is simply addressed by ISA instructions. The reason drivers exist is to access hardware unknown to the ISA, such as a USB device, a particular manufacturer's SSD (which may or may not be present at power-up time), graphics hardware, etc. Like the CPU, RAM must be present at power-up; you won't get much further than an error code via lights and beeps without RAM in your system at power-up time. That isn't the case for most, if not all, other hardware; such hardware is therefore optional, isn't part of the ISA definition, and needs drivers (software) to access it.
RAM is managed both by hardware (the CPU; see the Intel manual, Volume 3), which on modern parts provides virtualization and paging support for a modern OS, and by software (the OS memory allocator), for the purpose of virtualizing RAM for the running processes. No drivers, though; just addressing via ISA instructions.
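To make the distinction concrete, here is a small C sketch (the device register address below is made up purely for illustration): ordinary RAM is reached with plain loads and stores that the ISA defines, while a memory-mapped peripheral register is reached with the same instructions, but only at an address a driver has discovered and mapped, and with volatile semantics so the compiler doesn't cache or drop the access.

    #include <stdint.h>

    /* Ordinary RAM: just load/store instructions, no driver involved.
     * The OS memory allocator and the CPU's paging hardware decide which
     * physical page backs this; software only ever sees an address. */
    static uint32_t counter;

    void touch_ram(void)
    {
        counter += 1;           /* compiles to a plain load/add/store */
    }

    /* Memory-mapped peripheral: same load/store instructions, but the
     * physical address below is hypothetical and would normally be
     * discovered by a driver (e.g. from a PCI BAR or ACPI table) and
     * mapped before use. */
    #define EXAMPLE_DEVICE_REG_ADDR 0xFED00000u   /* made-up address */

    void touch_device(void)
    {
        volatile uint32_t *reg = (volatile uint32_t *)EXAMPLE_DEVICE_REG_ADDR;
        *reg = 1;               /* the write reaches the device, not RAM */
    }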
If you're looking for an answer from a hardware perspective, such as the details of the bus circuitry, which CPU pins are involved, the exact protocol, etc., then this question is a better candidate for the Engineering Stack Exchange site.
I'm writing a Linux app that talks from user space "directly" to a PCIe card via DMA, without interrupts or kernel involvement.
My aim is to minimize the data travel time between the card and my app.
Currently I'm getting latencies of about 800 ns, while I was expecting about half that.
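For reference, the user-space access path looks roughly like the sketch below (the sysfs path, BAR size, and register offsets are placeholders, not my actual card):

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Path is illustrative; substitute your card's domain:bus:dev.fn. */
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0",
                      O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        size_t bar_size = 4096;   /* assumed BAR0 size; read "resource" in practice */
        volatile uint32_t *bar = mmap(NULL, bar_size, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        if (bar == MAP_FAILED) { perror("mmap"); return 1; }

        /* Busy-poll a (hypothetical) status register at offset 0 until the
         * card signals that a DMA buffer is ready, then read a data word. */
        while ((bar[0] & 0x1) == 0)
            ;                     /* no interrupts, no syscalls in the hot loop */
        uint32_t value = bar[1];
        printf("got 0x%08x\n", (unsigned)value);

        munmap((void *)bar, bar_size);
        close(fd);
        return 0;
    }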
Questions:
1. Is there a way to configure PCIe latency? What are the parameters?
2. If not, are there motherboards with a faster PCIe bus? I looked at a few boards but didn't see PCIe latency numbers... What boards are good for this kind of work?
Thanks!
I am looking into ways to stream a USB camera over long distances. The plan is to plug the camera into a Raspberry Pi 2, then send the data over Wi-Fi via TCP. Is this a viable plan?
I have found a few approaches like this:
https://www.virtualhere.com/
But they seem to be aimed at printers and other 'static' devices. Will a streaming data connection work with this type of approach?
Thanks.
Yes, a USB camera uses the "isochronous" transfer mode of the USB protocol.
VirtualHere and most other usb/ip software support this mode.
The main issue is network latency; the lower the webcam resolution, the less of a problem latency will be. Also, the Pi 2 has a fast CPU, but its USB bus is shared with the Ethernet controller, so throughput is effectively cut in half. An embedded board that doesn't share the USB bus, like a BeagleBone or ODROID-C1, may provide better performance for the same price.
(I am the author of VirtualHere)
We are developing a USB driver for an Ethernet device on WinCE 6.
We are seeing performance issues and, using code profiling, have narrowed them down to the USB stack. 95% of the time in the TX path is spent in IssueBulkTransfer, which causes the driver to queue packets internally. The TX-complete routine is not called in sync with IssueBulkTransfer.
We have used a USB analyzer to check the bus bandwidth usage and found it to be 20-30% of the total bandwidth, so the hardware is fast enough to transfer data across the interface.
Given the above findings, the bottleneck seems to be in the USB bus driver and the USB host controller driver.
Is there any known performance limitation in the WinCE 6 USB stack?
What is the maximum speed we can get with a high-speed (USB 2.0) device using the WinCE 6.0 USB stack?
Are you using synchronous transfers? If you use asynchronous ones, you may be able to queue multiple packets for TX or RX, and the host driver will not have to wait until your driver receives the completion notification to issue a new TX or RX request. This may allow you to use more bandwidth. You may also allocate buffers using HalAllocateCommonBuffer, or by reserving some physical memory range for buffers; in this way you may avoid copies in the driver if the driver can use DMA.
You did not provide details about your hardware architecture, so it's difficult to estimate the level of performance you can expect.
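As a rough illustration of the queue-several-async-transfers idea, here is a sketch in plain C. The submit_bulk_async() function is a hypothetical placeholder for whatever your stack exposes (on WinCE 6 that would be the asynchronous use of IssueBulkTransfer with a transfer-notify routine); it is not a real API name, and the buffer sizes are arbitrary:

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_INFLIGHT 4            /* enough outstanding transfers to hide completion latency */
    #define XFER_SIZE    (16 * 1024)

    struct xfer {
        uint8_t buf[XFER_SIZE];
        int     busy;
    };

    static struct xfer pool[NUM_INFLIGHT];

    /* HYPOTHETICAL: submit one bulk OUT transfer without blocking; the stack
     * calls tx_complete(ctx) later from its completion path. */
    extern int submit_bulk_async(void *buf, size_t len,
                                 void (*done)(void *ctx), void *ctx);

    static void tx_complete(void *ctx)
    {
        struct xfer *x = ctx;
        x->busy = 0;                  /* slot is free again; refill it promptly */
    }

    /* Called by the Ethernet send path: find a free slot and queue it.
     * Because several transfers are always outstanding, the host controller
     * can start the next one as soon as the current one finishes, instead of
     * waiting for the driver to observe the completion first. */
    int queue_packet(const uint8_t *data, size_t len)
    {
        for (int i = 0; i < NUM_INFLIGHT; i++) {
            if (!pool[i].busy && len <= XFER_SIZE) {
                pool[i].busy = 1;
                for (size_t j = 0; j < len; j++)
                    pool[i].buf[j] = data[j];
                return submit_bulk_async(pool[i].buf, len, tx_complete, &pool[i]);
            }
        }
        return -1;                    /* all slots busy: caller should retry */
    }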
First up, I don't know much about USB, so apologies in advance if my question is wrong.
In USB 2.0 the polling interval was 0.125ms, so the best possible latency for the host to read some data from the device was 0.125ms. I'm hoping for reduced latency in USB 3.0 devices, but I'm finding it hard to learn what the minimum latency is. The USB 3.0 spec says, "USB 2.0 style polling has been replaced with asynchronous notifications", which implies the 0.125ms polling interval may no longer be a limit.
I found some benchmarks for a USB 3.0 SSD that suggest data can be read from the device in slightly less than 0.125ms, and that includes all the time spent in the host OS and the device's flash controller.
http://www.guru3d.com/articles_pages/ocz_enyo_usb_3_portable_ssd_review,8.html
Can someone tell me what the lowest possible latency is? A theoretical answer is fine. An answer including the practical limits of the various versions of Linux and Windows USB stacks would be awesome.
To head off the "tell me what you're trying to achieve" question: I'm creating a debug interface for the ASICs my company designs, i.e. a PC connects to one of our ASICs via a debug dongle. One possible use case is to implement conditional breakpoints when the ASIC hardware only implements simple breakpoints. To do so, I need to detect when a simple breakpoint has been hit, evaluate the condition, and, if it is false, set the processor running again. The simple breakpoint may be hit millions of times before the condition becomes true. We might implement the debug dongle on an FPGA or an off-the-shelf USB 3.0-enabled microcontroller.
Answering my own question...
I've come to realise that this question kind-of misses the point of USB 3.0. Unlike 2.0, it is not a shared-bus system. Instead it uses a point-to-point link between the host and each device (I'm oversimplifying but the gist is true). With USB 2.0, the 125 us polling interval was critical to how the bus was time-division multiplexed between devices. However, because 3.0 uses point-to-point links, there is no multiplexing to be done and thus the polling interval no longer exists. As a result, the latency on packet delivery is much less than with USB 2.0.
In my experiments with a Cypress FX-3 devkit, I have found that it is easy enough to get a round trip from a Windows application to the device and back with an average latency of 30 us. I suspect that the vast majority of that time is spent in various OS delays, e.g. the user-space to kernel-space mode switch and the DPC latency within the driver.
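For anyone wanting to reproduce this kind of measurement, a host-side loop can look roughly like the sketch below. This uses libusb on Linux rather than the Windows stack I measured with, and the vendor/product IDs and endpoint addresses are placeholders for whatever loopback firmware the device runs:

    /* Rough round-trip latency probe: write a small packet to a bulk OUT
     * endpoint, wait for the device to loop it back on the bulk IN endpoint,
     * and time the pair. Compile with: gcc latency.c -lusb-1.0 */
    #include <libusb-1.0/libusb.h>
    #include <stdio.h>
    #include <time.h>

    #define VID    0x04b4         /* Cypress vendor ID */
    #define PID    0x00f0         /* placeholder: your firmware's product ID */
    #define EP_OUT 0x01           /* placeholder endpoint addresses */
    #define EP_IN  0x81

    static double now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(void)
    {
        libusb_context *ctx;
        libusb_init(&ctx);
        libusb_device_handle *h = libusb_open_device_with_vid_pid(ctx, VID, PID);
        if (!h) { fprintf(stderr, "device not found\n"); return 1; }
        libusb_claim_interface(h, 0);

        unsigned char buf[64] = { 0 };
        int transferred, iterations = 10000;
        double start = now_us();
        for (int i = 0; i < iterations; i++) {
            libusb_bulk_transfer(h, EP_OUT, buf, sizeof buf, &transferred, 1000);
            libusb_bulk_transfer(h, EP_IN,  buf, sizeof buf, &transferred, 1000);
        }
        printf("avg round trip: %.1f us\n", (now_us() - start) / iterations);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(ctx);
        return 0;
    }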
I've got a couple of resources for you. One I've just downloaded is the complete spec, several PDFs zipped up for USB 3.0; here is a short excerpt from pages 58-59 (USB 3_r1.0_06_06_2011.pdf):
USB 2.0 transmits SOF/uSOF at fixed 1 ms/125 μs intervals. A device driver may change the interval with small finite adjustments depending on the implementation of host and system software. USB 3.0 adds a mechanism for devices to send a Bus Interval Adjustment Message that is used by the host to adjust its 125 μs bus interval up to +/-13.333 μs.
In addition, the host may send an Isochronous Timestamp Packet (ITP) within a relaxed timing window from a bus interval boundary.
Here is one more resource that looked interesting, which deals with calculating latency.
You make a good point about operating system latency issues, especially in not real time operating systems.
I might suggest that you check on Super User too; maybe someone has other ideas. Cheers!
I dispute the marked answer.
On Windows there is no way to achieve the stated round-trip latency over USB, SuperSpeed (3.0) or not. The documentation states:
The number of isochronous packets must be a multiple of the number of packets per frame.
https://learn.microsoft.com/en-us/windows-hardware/drivers/usbcon/transfer-data-to-isochronous-endpoints
The number of packets per frame is given by bInterval, which also determines the polling interval. E.g., if you want to achieve a transfer every microframe (125 us), you will need to submit 8 transfers per URB (USB Request Block), which means a scheduling service interval of 1 ms.
Anything else requires your own kernel-mode driver or is out-of-spec.
On RT Linux I can confirm round trips of 2 x 125 us plus some overhead.
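To illustrate the packets-per-URB arithmetic on the Linux side, here is a libusb sketch that submits a single isochronous transfer carrying 8 packets, i.e. 8 x 125 us = 1 ms of bus time per URB. The endpoint address and packet size are placeholders for a hypothetical bInterval = 1 isochronous IN endpoint:

    /* One isochronous IN transfer covering 8 x 125 us service intervals (= 1 ms).
     * Endpoint 0x81 and the 1024-byte packet size are placeholders.
     * Compile with: gcc iso.c -lusb-1.0 */
    #include <libusb-1.0/libusb.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define EP_ISO_IN    0x81
    #define PKTS_PER_URB 8        /* 8 packets at 125 us each -> 1 ms per URB */
    #define PKT_SIZE     1024

    static void cb(struct libusb_transfer *xfer)
    {
        printf("transfer done, status %d\n", xfer->status);
        libusb_submit_transfer(xfer);   /* resubmit to keep the stream going */
    }

    int submit_iso(libusb_device_handle *h)
    {
        unsigned char *buf = malloc(PKTS_PER_URB * PKT_SIZE);
        struct libusb_transfer *xfer = libusb_alloc_transfer(PKTS_PER_URB);
        if (!buf || !xfer)
            return -1;

        libusb_fill_iso_transfer(xfer, h, EP_ISO_IN, buf,
                                 PKTS_PER_URB * PKT_SIZE, PKTS_PER_URB,
                                 cb, NULL, 1000);
        libusb_set_iso_packet_lengths(xfer, PKT_SIZE);
        return libusb_submit_transfer(xfer);
    }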
Excerpts from embedded.com: "USB 3.0 vs USB 2.0: A quick reference summary for the busy engineer"
Communication architecture differences
USB 2.0 employs a communication architecture where the data transaction must be initiated by the host. The host will frequently poll the device and ask for data, and the device may only transmit data once it has been requested by the host. The high polling frequency not only increases power consumption, it increases transmission latency because the data can only be transmitted when the device is polled by the host. USB 3.0 improves upon this communication model and reduces transmission latency by minimizing polling and also allowing devices to transmit data as soon as it is ready.
...
Timestamp enhancements
Unlike USB 2.0 cameras, which can range in accuracy from 0 to 125 us, the timestamp originating from USB 3.0 cameras is more precise, and mimics the accuracy of the 1394 cycle timer of FireWire cameras.
...
USB 3.0, or SuperSpeed USB, overcomes the key limitations of other specifications with six (over IEEE 1394b) to nine (over USB 2.0) times higher bandwidth, better error management, higher power supply, ... and lower latency and jitter.
P.S. It also mentions "longer cable lengths" for USB 3.0, but another paragraph contradicts this, saying up to 5 m for USB 2.0 and up to 3 m for USB 3.0.