Virtual Serial Port Example - Objective-C

I need to communicate with some custom hardware that will use either an FTDI or a Silicon Labs USB-to-serial driver.
I found a couple of examples, but they are older, and I was hoping for a more up-to-date example. I have also been confused by the new AppleUSBFTDI kernel driver and how it works with IOKit and with other chips like the Silicon Labs part. It would be nice to have one program that doesn't care which driver is used.
I have already looked at this example:
FTDI Communication with USB device - Objective C

The nature of these drivers and devices is that they are supposed to function as a standard serial port virtually over USB. So in terms of access it should be no different than accessing a standard RS232 COM port.
I would suggest reading the Serial Programming Guide for POSIX Operating Systems. I'm not sure which older examples you're seeing; serial access itself is many years old, and the idea behind communicating with the serial device is the same for these USB-to-serial bridge devices.
For information on some Objective-C frameworks, take a look at this Stack Overflow post.
Finally, here is an article directly from the Apple documentation, Working With a Serial Device, and you'll see it also references the POSIX style API.
You should simply need to install the driver associated with your device and plug it in for this to work. For the Silicon Labs CP210x, just download and install the OS X driver, then plug in your device. This is where the one difference may show up: the name of the tty device on the system (it will appear in the /dev directory). In the case of the CP210x it will show up and be accessible as tty.SLAB_USBtoUART or cu.SLAB_USBtoUART. This is the name of the device you should open; then use an API from above to start your communication.
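Putting those pieces together, here is a minimal sketch in plain C (it compiles unchanged inside an Objective-C project) of opening and configuring the port with the POSIX API from the guides above. The device name, the "PING" command, and the 115200 8N1 settings are assumptions rather than anything your hardware necessarily uses; an FTDI-based device will simply appear under a different cu.* name in /dev.

    /*
     * Minimal sketch: open the virtual serial port and talk to it with POSIX calls.
     * Device name, command string, and 115200 8N1 are assumptions about your hardware.
     */
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        const char *path = "/dev/cu.SLAB_USBtoUART";   /* FTDI parts show up under a different cu.* name */
        int fd = open(path, O_RDWR | O_NOCTTY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }
        fcntl(fd, F_SETFL, 0);                         /* back to blocking I/O once the open succeeds */

        struct termios options;
        tcgetattr(fd, &options);
        cfmakeraw(&options);                           /* raw bytes: no echo, no line editing */
        cfsetispeed(&options, B115200);                /* whatever baud rate your hardware expects */
        cfsetospeed(&options, B115200);
        options.c_cflag |= (CLOCAL | CREAD);           /* ignore modem control lines, enable receiver */
        tcsetattr(fd, TCSANOW, &options);

        const char *cmd = "PING\r\n";                  /* placeholder command for your hardware */
        write(fd, cmd, strlen(cmd));

        char buf[64];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) printf("received %zd bytes\n", n);

        close(fd);
        return 0;
    }

Opening the cu.* node rather than tty.* is usually preferable for this kind of callout use, since the tty.* node blocks waiting for carrier detect unless you open it non-blocking.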

Related

Windows machine as USB-488/USBTMC device

I would like to use a Windows machine as a USB488/USBTMC device. USB488/USBTMC is a reimplementation of the good old GPIB/IEEE-488 on USB rails, but most articles on the topic treat a Windows machine as a host/controller, and the Windows USB stack is not well suited for USB device/USB OTG modes. However, if you look at some high-end gear like oscilloscopes and spectrum/network analyzers, it is well known that they are often Windows machines inside with some additional hardware. So, how is it done?
Some background: this is a project to retrofit a very old SEM microscope with new hardware. The current setup is a custom 68k system with a CRT that uses a GPIB interface for communication with a PC. Things like sample spectroscopy are done as a BASIC program running on a PC and communicating through that GPIB port. The plan is to replace that 68k junk with a modern-day Windows PC with an FPGA on a PCIe bus. For compatibility reasons, it would be nice to have a USB488 port in the new PC, though I have no idea how to do it properly. The only solution I have so far is to hang some cheap USB-capable micro on the SPI bus on the FPGA-facing side and present a USBTMC class device on the USB side. But maybe I'm missing something, and there is a specific chip that can do this that I'm not aware of.
I can only speculate about how high-end oscilloscopes achieve it. The most likely option is that they use a dedicated USB device controller chip like the MAX3420E, connected via SPI. Part of the USB protocol is implemented by the chip; the rest is implemented by the oscilloscope software.
Most USB controller chips found in PCs can operate as the host only. And even if they could do a role swap, Windows (for desktop) did not support device/peripheral mode until recently. It now does; see USB Dual Role Driver Stack Architecture. But I don't understand it well enough to tell you what hardware you would need to purchase to get this feature.
Role swapping is very common on smartphones. It is also implemented in Linux (search for "Linux USB gadget"). Many Apple Macs can run in Target Disk Mode, which is a USB device/peripheral mode as well.

Why does a USB CDC device require a host driver in Windows?

What is the additional advantage of developing a full UMDF Windows driver if a CDC device is already detected as a virtual COM port?
I have some experience with embedded examples using a microcontroller, both communicating with a terminal like Tera Term and using a dedicated USB peripheral that provides CDC or HID functionality.
Are these drivers the ones you download and install before using any USB device? It is not yet clear to me what features are available through the host driver.
Often on Windows the "driver" for a USB serial device is no more than an inf file that maps the device VID/PID to the Microsoft usbser.sys CDC/ACM driver.
Recent releases of Windows 10 appear to have stopped insisting that each CDC/ACM device have a VID/PID-specific inf file, and will load the standard driver for any device that presents itself as a CDC/ACM device.
The advantage of having a VID/PID specific inf file is that your device can have a vendor specific "friendly name" which can be used in applications to more easily identify your device, rather than just appearing as a generic "USB Serial Device".
One problem with Microsoft's usbser.sys driver (and Linux and Mac OS are no different), at least until recently, was that if you disconnect the USB device, the driver is unloaded even if an application has the COM port open, and the application must close and reopen the port to recover when the device is reconnected. I have previously used a custom third-party driver that does not unload while an application has the port open, so that when the USB cable is reconnected the data connection continues as normal, just as it would over an RS-232 cable. However, in recent versions of Windows 10 I have noticed that usbser.sys appears to exhibit this persistent behaviour in any case.
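As an illustration of what an application has to do with the stock drivers, here is a hedged POSIX-style sketch of the close-and-reopen recovery loop; the device path is a placeholder, and on Windows the same logic applies to the COM-port handle via the Win32 file API.

    /*
     * Sketch: recover from a USB serial disconnect by closing and reopening the port.
     * The device path is a placeholder; the exact error reported on unplug varies by OS.
     */
    #include <fcntl.h>
    #include <unistd.h>
    #include <errno.h>

    static int open_port(const char *path)
    {
        return open(path, O_RDWR | O_NOCTTY);   /* returns -1 while the cable is unplugged */
    }

    int main(void)
    {
        const char *path = "/dev/ttyUSB0";      /* placeholder device name */
        int fd = open_port(path);
        char buf[256];

        for (;;) {
            if (fd < 0) {                       /* port gone: retry until it comes back */
                sleep(1);
                fd = open_port(path);
                continue;
            }
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n > 0) {
                /* process data... */
            } else if (n == 0 || (n < 0 && errno != EINTR)) {
                /* EOF or I/O error usually means the device was unplugged */
                close(fd);
                fd = -1;
            }
        }
    }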
Note that when you do provide your own driver file, or even just a custom inf file, you will be required to go through WHQL testing in order for your device to be allowed on Windows 10, or to load without warnings on earlier versions. To do that you will need a USB.org assigned VID, an Extended Validation code-signing certificate, and either the tools and facilities to perform the testing, or pay a test house to perform the testing. That all gets somewhat expensive, and may be prohibitive for low volume, low value or non commercial products.
At one time Microsoft also charged for WHQL submissions, but they no longer do so. That is, however, little consolation, since at the same time they changed to requiring an EV certificate and ended a deal they used to have for low-cost certificates.
The advantage of WHQL qualification is that your driver will be distributed via Windows Update.
If you are using a USB serial bridge chip rather than your own USB stack and USB controller, then there is a lower-cost solution. These chips can be customised with your own VID/PID and descriptors, and the vendor's existing WHQL-qualified driver can be associated with your device, so you get all the advantages of your own driver without the costs. Most vendors will even allow you to use their VID and will assign you a unique PID so you can avoid USB.org fees. I have used this route with both Prolific and FTDI devices; it is by far the most cost-effective solution.

How does USB integration work from the device end?

Hopefully I will have more luck today. I have no prior USB integration experience and about 8 months of learning embedded systems on Atmel devices. I am trying to use an Atmel SAM L series MCU to connect over USB to a computer. The use case is data transfer: the MCU will be gathering data from its sensors and packaging it for USB transfer.
I have searched through and read up on all of Atmel's included USB examples. I have also started reading through usb.org's class specifications for CDC.
I now have something running that lets me send data along one COM port, into the target USB port, and then out the debugger USB port to another COM port. However, I don't think this is real USB.
My problem is twofold.
1.) I do not fully understand what differentiates USB from serial communication on a COM port.
2.) Even if I were doing it correctly, I'm not sure how to test and verify that I have indeed created a legitimate USB device that can be accepted by a host computer.
Links to documentation (Atmel or generic) or example code would be appreciated.
1) USB is defined in the USB specifications from http://www.usb.org. Serial ports are an older and simpler interface that involves sending data back and forth asynchronously on pins with names like TX and RX. The USB CDC class and its ACM subclass allow you to make a USB device that emulates a serial port. If you make your device a USB CDC ACM device, then you don't need to supply any drivers for Windows 10, Linux, or Mac OS X.
2) You can read the USB specification and the CDC ACM specification. You can run the USB command verifier. You can test your device with a variety of different USB hosts to make sure it works.
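One concrete way to exercise point 2 from the host side: once the board enumerates as a CDC ACM device, the OS gives you an ordinary serial port, and you can test it with plain serial I/O. The sketch below is a rough check in C that makes two assumptions not in the original answer: that your firmware echoes back whatever it receives, and that the port appears as /dev/ttyACM0 (a typical name on Linux).

    /*
     * Rough host-side check for a CDC ACM device: open the port the OS created,
     * send a few bytes, and see whether the firmware echoes them back.
     * Assumes echo firmware and the placeholder port name /dev/ttyACM0.
     */
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open("/dev/ttyACM0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios t;
        tcgetattr(fd, &t);
        cfmakeraw(&t);                      /* raw bytes, no line discipline */
        tcsetattr(fd, TCSANOW, &t);

        const char msg[] = "hello";
        write(fd, msg, sizeof(msg) - 1);

        char buf[sizeof(msg)];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0 && memcmp(buf, msg, (size_t)n) == 0)
            printf("echo OK (%zd bytes)\n", n);
        else
            printf("no or partial echo\n");

        close(fd);
        return 0;
    }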

UART over USB for STM32 Micro-controller

I'm trying to implement UART over a USB interface on the STM324x9I-EVAL development board. The purpose is to send commands to a servo controller (or other hardware, for that matter) serially. I've successfully implemented the USB_Device_CDC example on the development board but am unsure exactly how this works without a PC with drivers on the other end. As far as other hardware is concerned, will the USB port now simply look like a serial port? Or is there still a need for a driver or some sort of interface on the other end?
I do want to point out that I'm aware of the following post:
Emulating UART over USB
but I don't believe my question is fully answered within the context of that answer.
A USB connection is not a peer-to-peer connection like a UART. It requires a host and a device in a master/slave relationship. The device cannot initiate data transfer; it must be continuously polled by the host.
A CDC/ACM class device presents a virtual COM port on a PC host, but that does not allow the device to communicate with a UART interface. It looks like a serial port at the software level, but does not implement a UART physical layer. There is an awful lot going on under the hood to make it look like a PC serial port, but none of it resembles UART communications at the physical level.
There are devices that act as UART/USB bridges (from FTDI and Prolific for example), and you could (somewhat expensively) build your own from a microcontroller that has a USB device controller and a UART, but the bridge is a USB device and must still connect to a USB host; these are normally used to connect a PC to a microcontroller that lacks a USB controller or where the software/CPU overhead of using a USB controller is too great.
In theory you could connect a microcontroller that has a USB host controller to one that has a USB device controller, but you need host and device software stacks on each respectively, and once you have the USB connection, implementing CDC/ACM is a somewhat inefficient use of the available bandwidth. The purpose of the CDC/ACM class is primarily to allow "legacy" software to work on a PC.
If you need to connect to a "real" serial port, you should use a real UART, which is far more ubiquitous than a USB controller on microcontrollers in any case.
You should learn a little bit about USB device classes. CDC is a USB device class, and ACM is a subclass that I assume you are using. The device you made could be called a "CDC ACM device" because it uses the CDC class and the ACM subclass.
These classes and subclasses are defined by the USB Implementers Forum in documents that you can find here:
http://www.usb.org/developers/docs/devclass_docs/
These documents specify things like what USB descriptors a CDC ACM device should have in order to describe itself to the host, and what kinds of interfaces and endpoints it should have, and how serial data will be represented in terms of USB transactions and transfers.
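To make that concrete, here is a hedged sketch of what the 18-byte device descriptor of a typical single-function CDC ACM device contains. The VID/PID and string indices are placeholders, and a complete device also needs configuration, interface, CDC functional, and endpoint descriptors, which your USB stack usually builds for you.

    /* Sketch of a USB device descriptor for a CDC ACM device (18 bytes).
     * The VID/PID below are placeholders; a real product must use assigned values. */
    #include <stdint.h>

    static const uint8_t device_descriptor[18] = {
        18,         /* bLength */
        0x01,       /* bDescriptorType: DEVICE */
        0x00, 0x02, /* bcdUSB: USB 2.0 (little-endian) */
        0x02,       /* bDeviceClass: Communications Device Class (CDC) */
        0x00,       /* bDeviceSubClass */
        0x00,       /* bDeviceProtocol */
        64,         /* bMaxPacketSize0 for endpoint 0 */
        0x34, 0x12, /* idVendor  (placeholder 0x1234) */
        0x78, 0x56, /* idProduct (placeholder 0x5678) */
        0x00, 0x01, /* bcdDevice: device release 1.00 */
        1,          /* iManufacturer string index */
        2,          /* iProduct string index */
        3,          /* iSerialNumber string index */
        1           /* bNumConfigurations */
    };

The host's class drivers are then matched against the CDC control and data interfaces described in the configuration descriptor.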
Note that CDC ACM only specifies some USB commands for transferring data between the host and the device. It does not specify what the device will actually do with that data. You can use CDC ACM to implement a USB-to-serial adapter, or you can just use it as a general purpose communication interface for whatever data you want to send.
Yes, you do need a driver on the PC side. The driver needs to be designed to run on your specific operating system. It needs to create some kind of virtual serial port device in your operating system that other software (which only knows about serial ports) can find and connect to. It needs to translate serial port operations performed by other software on the serial port (e.g. writing some bytes to the serial port) into low-level USB commands according to the CDC ACM specifications (e.g. sending some bytes out to the device on a particular endpoint in the form of USB packets). It needs to somehow know which USB devices it should operate on, since not every USB device is a CDC ACM device.
For Windows, you will probably use the usbser.sys driver which comes with Windows. For versions of Windows older than Windows 10, you will need to write an INF file to associate your device to usbser.sys and sign it. For Windows 10 and later, there is a new INF file called usbser.inf already included with Windows which will automatically match any valid CDC ACM device. This means you don't have to write or distribute a driver for CDC ACM devices if you only intend to support using the device on Windows 10 or later. The partnership between Microsoft and Arduino which began in 2015 gives me hope that Microsoft will continue supporting and improving usbser.sys in the future. In fact, they claim that in Windows 10 "the driver has been rewritten by using the Kernel-Mode Driver Framework that improves the overall stability of the driver", so that is good news.
For Linux, there is the cdc_acm kernel module, which has been a standard part of the kernel for a long time and should work automatically with any CDC ACM device you plug in.
For Mac OS X, there is the AppleUSBCDCACM driver, which should work automatically with any CDC ACM device you plug in.
Note that for any of these drivers to recognize your device and work with it, your device has to have certain values in its USB descriptors, and the requirements can vary depending on what exact driver version you are talking about.
Will the USB port now simply look like a serial port?
No, that's the wrong way to think about it. The USB port will still look like a USB port, but the various USB drivers provided by your operating system will recognize that a CDC ACM device is plugged into that port and create a new entry in your operating system's list of serial ports. Then if you run some software that only knows about serial ports, it can connect to that port.
In fact, if you make a composite device, you can have a single USB device plugged into a single USB port that actually has two or more virtual serial ports.

Programming USB in embedded system for sending some data to host for printing

I have been tasked with writing a USB driver for our embedded software to send raw data to the host. This will be used to send some logging data to the host. We are using an i.MX31 LiteKit for development.
From the documents I have read on USB, my understanding is that the embedded device will be in device mode only. Also, it will only be communicating with a host machine.
So can anyone guide me here? Any article, reference, or code is welcome.
Some things to consider:
Is this a high bandwidth device like a camera or data recorder, or a low bandwidth device?
For low bandwidth, I would strongly consider making your device act as a USB HID class. This is the device class that supports keyboards, mice, joysticks, gamepads, and the like. It is relatively easy to send data to nearly any application, and it generally doesn't require that you write a custom device driver on the host side. That latter feature alone is often worth the cost of lightly contorting your data into the shape assumed by the HID class. All the desktop operating systems that do USB can use HID devices, so you get broad compatibility fairly easily.
For high bandwidth, you would still be better served if your device fits one of the well established device classes, where a stock device driver on the host end of the wire can be used. One approach that often works is to use the Mass Storage class, and emulate a disk drive containing one file. Then, your device simply mounts on the host as if it were a disk, and you communicate by reading and writing to one (or a few) file.
I would expect there to be a fair amount of sample code out there for any serious USB device chipset that implements either or both of HID and Mass Storage.
If you really must wander into fully custom device territory, then you will need to be building device drivers for each host platform. The open source libusb library can be of some help, if its license is compatible with your project. There are also ways in newer versions of Windows to develop USB drivers that run in user mode using the User Mode Driver Framework that have many of the same advantages of libusb, but are not portable off the Windows platform.
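If you do end up in that territory, a minimal host-side sketch using libusb-1.0 looks roughly like this; the VID/PID, interface number, and endpoint address (0x81, i.e. EP1 IN) are placeholders for whatever your device actually reports in its descriptors.

    /* Minimal libusb-1.0 sketch: open a device by VID/PID and read a bulk IN endpoint.
     * VID/PID, interface number, and endpoint address are placeholders. */
    #include <libusb-1.0/libusb.h>   /* include path may differ on your platform */
    #include <stdio.h>

    int main(void)
    {
        libusb_context *ctx = NULL;
        if (libusb_init(&ctx) != 0) return 1;

        libusb_device_handle *h =
            libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);  /* placeholder IDs */
        if (!h) { fprintf(stderr, "device not found\n"); libusb_exit(ctx); return 1; }

        libusb_claim_interface(h, 0);            /* interface number from your descriptors */

        unsigned char buf[64];
        int transferred = 0;
        int rc = libusb_bulk_transfer(h, 0x81,   /* IN endpoint address */
                                      buf, sizeof(buf), &transferred, 1000 /* ms */);
        if (rc == 0)
            printf("read %d bytes of log data\n", transferred);

        libusb_release_interface(h, 0);
        libusb_close(h);
        libusb_exit(ctx);
        return 0;
    }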
The last custom device I worked on was based on a Cypress device, and we were able to ship their driver and an associated DLL to make our application code easier to build. I don't know off the cuff if there is any equivalent available for your device.
For a really good overview, I recommend the USB FAQ and the latest edition of Jan Axelson's book, USB Complete.