Modelling problem - Networked devices with commands - OOP

I encountered a head-scratching modelling problem today:
We are modelling a physical control system composed of Devices and NetworkDevices. An example of a Device is a TV. An example of a NetworkDevice is an IR transceiver with an Ethernet connection.
As you can see, to be able to control the TV over the internet we must connect the Device to a NetworkDevice. There is a one-to-many relationship between NetworkDevice and Device, i.e. a TV has only one NetworkDevice (the IR transceiver), but the IR transceiver may control many Devices (e.g. many TVs).
So far no problem.
The complicated bit is that every Device has a collection of Commands. The type of the Command (e.g. IrCommand, SerialCommand - N.B. not currently modelled) depends on the type of NetworkDevice that the Device is connected to.
In the current legacy system, the Device has a collection of generic Commands (no typing) whose fields are "interpreted" depending on the NetworkDevice type.
How do I go about modelling this in OOP such that:
You can only ever add a Command of the appropriate type, given the NetworkDevice the Device is attached to?
If I change the NetworkDevice, the Commands collection changes to the appropriate type?
The API is simple, elegant and intuitive to use?

You could use the Abstract Factory Pattern. The idea would be to give the Device a factory for creating Commands. The type of the factory would depend on the type of the NetworkDevice. So, if a Device is connected to an IR-Controller, it will get an IRCommandFactory.
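A minimal sketch of that idea in C++ follows. Device, NetworkDevice, IrCommand, SerialCommand and the factory come from your description; everything else (IrTransceiver, attachTo, addCommand, the spec string) is illustrative and would look different in your real system.

#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

struct Command {
    virtual ~Command() = default;
    virtual std::string describe() const = 0;
};

struct IrCommand : Command {
    explicit IrCommand(std::string code) : irCode(std::move(code)) {}
    std::string describe() const override { return "IR code: " + irCode; }
    std::string irCode;
};

struct SerialCommand : Command {
    explicit SerialCommand(std::string bytes) : payload(std::move(bytes)) {}
    std::string describe() const override { return "Serial payload: " + payload; }
    std::string payload;
};

// Each NetworkDevice type knows how to build Commands of its own flavour.
struct CommandFactory {
    virtual ~CommandFactory() = default;
    virtual std::unique_ptr<Command> makeCommand(const std::string& spec) const = 0;
};

struct IrCommandFactory : CommandFactory {
    std::unique_ptr<Command> makeCommand(const std::string& spec) const override {
        return std::make_unique<IrCommand>(spec);
    }
};

struct NetworkDevice {
    virtual ~NetworkDevice() = default;
    virtual const CommandFactory& commandFactory() const = 0;
};

struct IrTransceiver : NetworkDevice {
    const CommandFactory& commandFactory() const override { return factory; }
    IrCommandFactory factory;
};

// A Device never constructs Commands directly; it always goes through the
// factory of whatever NetworkDevice it is currently attached to, so the
// Command type is correct by construction. Re-attaching clears the collection.
class Device {
public:
    void attachTo(std::shared_ptr<NetworkDevice> nd) {
        network = std::move(nd);
        commands.clear();  // old commands no longer make sense on the new transport
    }
    Command& addCommand(const std::string& spec) {
        if (!network) throw std::runtime_error("Device is not attached to a NetworkDevice");
        commands.push_back(network->commandFactory().makeCommand(spec));
        return *commands.back();
    }
private:
    std::shared_ptr<NetworkDevice> network;
    std::vector<std::unique_ptr<Command>> commands;
};

Usage would be along the lines of: tv.attachTo(std::make_shared<IrTransceiver>()); tv.addCommand("power-toggle");. Adding a serial-over-Ethernet NetworkDevice is then just another NetworkDevice/factory pair, and the second requirement falls out of attachTo clearing (or, if you prefer, re-translating) the collection.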

Related

How to programmatically find and mount/eject a USB device on macOS?

I am trying to unmount/eject a USB device programmatically on macOS.
Using IOKit, I tried registering for IOServiceMatching(kIOUSBInterfaceClassName), iterating over all devices, and for each device getting the BSD name and going from there:
CFTypeRef bsdName = IORegistryEntrySearchCFProperty(usbDevice, kIOServicePlane, CFSTR(kIOBSDNameKey), kCFAllocatorDefault, kIORegistryIterateRecursively);
But I found that on Intel-based devices that registry search doesn't work.
I do have the vendor ID, product ID, etc.
So my question:
Is there a different alternative?
Is there a syscall I can use?
Maybe a different approach that doesn't use IOKit?
Thanks
I tried to register using IOKit for:
IOServiceMatching(kIOUSBHostDeviceClassName)
and
IOServiceMatching(kIOUSBInterfaceClassName);
and
IOServiceMatching(kIOUSBDeviceClassName);
First of all, there's no concept of "ejecting" in USB itself. Mounting, unmounting, ejecting, etc. are all storage device/volume concepts, and USB mass storage devices are just one type of storage device on which you can perform these operations.
You therefore need to look at the Disk Arbitration framework, specifically the DADiskEject function for ejecting. There is a certain mapping between I/O Kit and Disk Arbitration objects, but not all DADisk objects will necessarily have corresponding I/O Kit objects, as they also exist for APFS snapshot volumes, network mounts, etc. If looking for the device via I/O Kit is natural (for example because you're interested in a specific vendor+product ID pair) then you can easily find an IOMedia object's corresponding DADisk using DADiskCreateFromIOMedia.
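For reference, a minimal sketch of the unmount-then-eject sequence, assuming you already hold an io_service_t for the relevant IOMedia object (error handling is kept deliberately thin):

#include <cstdio>
#include <CoreFoundation/CoreFoundation.h>
#include <DiskArbitration/DiskArbitration.h>
#include <IOKit/IOKitLib.h>

// Called when the eject finishes (or is refused); stop the run loop either way.
static void ejectDone(DADiskRef disk, DADissenterRef dissenter, void* context) {
    if (dissenter != NULL)
        std::fprintf(stderr, "eject refused: 0x%x\n", (unsigned)DADissenterGetStatus(dissenter));
    CFRunLoopStop(CFRunLoopGetCurrent());
}

// Called when the unmount finishes; if it succeeded, request the eject.
static void unmountDone(DADiskRef disk, DADissenterRef dissenter, void* context) {
    if (dissenter != NULL) {
        std::fprintf(stderr, "unmount refused: 0x%x\n", (unsigned)DADissenterGetStatus(dissenter));
        CFRunLoopStop(CFRunLoopGetCurrent());
        return;
    }
    DADiskEject(disk, kDADiskEjectOptionDefault, ejectDone, NULL);
}

// 'media' is an io_service_t for the whole-disk IOMedia object, obtained
// however you like (e.g. via the matching dictionary shown below).
static void unmountAndEject(io_service_t media) {
    DASessionRef session = DASessionCreate(kCFAllocatorDefault);
    DASessionScheduleWithRunLoop(session, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);

    DADiskRef disk = DADiskCreateFromIOMedia(kCFAllocatorDefault, session, media);
    DADiskUnmount(disk, kDADiskUnmountOptionWhole, unmountDone, NULL);

    CFRunLoopRun();   // the callbacks above fire on this run loop

    CFRelease(disk);
    DASessionUnscheduleFromRunLoop(session, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);
    CFRelease(session);
}

Call unmountAndEject() from wherever your device lookup code ends up with the IOMedia service; the asynchronous callbacks are why the Disk Arbitration session has to be scheduled on a run loop.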
For searching the I/O Kit registry for USB devices, use one of the documented matching dictionary formats.
For example, something like:
@{
    @kIOProviderClassKey: @kIOUSBHostDeviceClassName,
    @kUSBVendorID: @1234,
    @kUSBProductID: @5678,
}
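If you're working in plain C/C++ rather than Objective-C, the same dictionary can be built with CoreFoundation calls; the sketch below is an assumption of how you'd wire it up, and the vendor/product IDs are placeholders (the literal strings are the values of the kIOUSBHostDeviceClassName, kUSBVendorID and kUSBProductID macros):

#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>

// Returns an iterator over USB devices matching the given vendor/product IDs.
static io_iterator_t findUsbDevices(void) {
    // "IOUSBHostDevice" == kIOUSBHostDeviceClassName
    CFMutableDictionaryRef match = IOServiceMatching("IOUSBHostDevice");

    SInt32 vid = 0x1234, pid = 0x5678;   // placeholder IDs - substitute your device's
    CFNumberRef vidNum = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &vid);
    CFNumberRef pidNum = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &pid);
    CFDictionarySetValue(match, CFSTR("idVendor"), vidNum);   // kUSBVendorID
    CFDictionarySetValue(match, CFSTR("idProduct"), pidNum);  // kUSBProductID
    CFRelease(vidNum);
    CFRelease(pidNum);

    io_iterator_t iterator = IO_OBJECT_NULL;
    // IOServiceGetMatchingServices consumes one reference to 'match'.
    // (kIOMasterPortDefault is spelled kIOMainPortDefault in newer SDKs.)
    IOServiceGetMatchingServices(kIOMasterPortDefault, match, &iterator);
    return iterator;
}

From each matched device you can then walk down the registry (for example with IORegistryEntryCreateIterator over the IOService plane using kIORegistryIterateRecursively) until you reach the IOMedia node to hand to DADiskCreateFromIOMedia.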

Difference between an API and a device driver

I'm trying to understand how they relate to each other. As far as I know, they can both be part of the HAL. In the case of communication between an application and a graphics card: can an API get the job done on its own, or do we have to rely on both? Can an API communicate directly with the hardware, or do we always need a driver in between, which translates the commands of the API?
TL;DR
Think of an API as a specification that describes what to do, while a driver is an implementation that describes how to do it.
Details
As a contrived example, imagine we have three different audio cards that we want to play nicely with multiple operating systems. We can define an API for the card manufacturers that says, "Each card must support four methods: mute(), playsound(sound), volumeup() and volumedown()". By defining the API, we get a common interface that allows the operating system designers to support the audio devices without worrying about the hardware details. They know that if they want to mute the sound card, they can call mute(), or if they want to turn the volume up, they call volumeup().
It is then up to the device manufacturers to implement a driver that actually performs those actions. The driver will vary between the three different audio cards because they are different at the hardware level, but the API is consistent so the next higher abstraction level (the OS) doesn't need to know how to deal with the hardware.
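To make the separation concrete, here is a toy C++ rendering of that contrived example. The four method names come from the paragraph above; the class names, register addresses and logging are invented purely for illustration.

#include <cstdint>
#include <iostream>

struct Sound { /* samples, format, ... */ };

// The "API": a specification of what an audio card must be able to do.
class AudioCardApi {
public:
    virtual ~AudioCardApi() = default;
    virtual void mute() = 0;
    virtual void playsound(const Sound& sound) = 0;
    virtual void volumeup() = 0;
    virtual void volumedown() = 0;
};

// One "driver": a vendor-specific implementation of how to do it.
class VendorACardDriver : public AudioCardApi {
public:
    void mute() override                   { writeRegister(0x10, 0x01); }
    void playsound(const Sound&) override  { writeRegister(0x20, 0xFF); /* then stream the samples */ }
    void volumeup() override               { writeRegister(0x11, ++volume); }
    void volumedown() override             { writeRegister(0x11, --volume); }
private:
    void writeRegister(uint8_t reg, uint8_t value) {
        // A real driver would poke memory-mapped hardware registers; we just log.
        std::cout << "card A: reg " << int(reg) << " <- " << int(value) << "\n";
    }
    uint8_t volume = 5;
};

// The OS only ever programs against the API, so swapping in another vendor's
// driver requires no changes on the OS side.
void osMuteEverything(AudioCardApi& card) { card.mute(); }

The other two hypothetical cards would each ship their own subclass with different register pokes, while osMuteEverything stays untouched.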
For a more concrete example, consider the Advanced Configuration and Power Interface (ACPI) specification. It defines a common interface for operating systems to manage power consumption and thermal characteristics of hardware devices. There are methods that a device driver or firmware must implement in order to be "ACPI compliant". This allows Windows operating systems and Linux variants to both perform the same actions on hardware devices without needing to implement their own drivers for the hardware.
Note: Windows performs ACPI actions through acpi.sys, which they call an "ACPI Driver". Don't let the terminology confuse you; even though they call it a driver, it is really a window into the ACPI interface. Linux uses the acpi kernel module to do the same thing, and Linux doesn't call it a driver. Perhaps ACPI wasn't the best example, but I don't have anything better at the moment.

Hacking computer hardware to do experiment control

I am a physicist, and I had a revelation a few weeks ago about how I might be able to use my personal computer to get much finer control over laboratory experiments than is typically the case. Before I ran off to try this out though, I wanted to check the feasibility with people who have more expertise than myself in such matters.
The idea is to use the i/o ports---VGA, ethernet, speaker jacks, etc.---on the computer to talk directly to the sensors and actuators in the experimental setup. E.g. cut open one side of an ethernet cable (with the other end attached to the computer) and send each line to a different device. I knew a postdoc who did something very similar using a BeagleBone. He wrote some assembly code that let him sync everything with the internal clock and used the GPIO pins to effectively give him a hybrid signal generator/scope that was completely programmable. It seems like the same thing should be possible with a laptop, and this would have the additional benefit that you can do data analysis from the same device.
The main potential difficulty that I foresee is that the hardware on a BeagleBone is designed with this sort of i/o in mind, whereas I expect the hardware on a laptop will probably be harder to control directly. I know for example (from some preliminary investigation, http://ask.metafilter.com/125812/Simple-USB-control-how-to-blink-an-LED-via-code) that USB ports will be difficult to access this way, and VGA is (according to VGA 15 pin port data read and write using Matlab) impossible. I haven't found anything about using other ports like ethernet or speaker jacks, though.
So the main question is: will this idea be feasible (without investing many months for each new variation of the hardware), and if so what type of i/o (ethernet, speaker jacks, etc.) is likely to be the best bet?
Auxiliary questions are:
Where can I find material to learn how I might go about executing this plan? I'm not even sure what keywords to plug in on Google.
Will the ease with which I can do this depend strongly on operating system or hardware brand?
The only cable I can think of for a PC that comes close to this would be a parallel printer cable, which has pretty much gone away. It's a 25-wire cable that data is spread across so that more data can be sent at the same time. I'm just not sure if you can target a specific line or if it's more of a left-to-right fill as data is sent.
Using one on a laptop today would definitely be difficult. You won't find any laptops with parallel ports. There are USB-to-parallel and serial-to-parallel cables, but I would guess that the only control you would have is over the USB or serial interface, not the parallel lines themselves.
As for Ethernet, you have 4 twisted pairs, with only 2 pairs in use and 2 pairs spare (at least for 10/100 Mbit links).
There's some hardware available called Z-Wave that you might want to look into. Z-Wave will allow you to build a network of devices that communicate in a mesh. I'm not sure what kind of response time you need.
I actually just thought of something that might be a good solution. Check out security equipment. There's a lot of equipment available for PCs that monitors doors, windows, sensors, etc. That industry might have what you're looking for.
I think the easiest way would be to use the USB port with your device acting as a Human Interface Device (HID): a custom-programmed PIC that includes USB functionality can encode the data to be sent to the computer. That way it works independently of the OS, because all major OSes have USB HID support built in (see the host-side sketch below).
Anyway, if you used your mic/VGA/HDMI or whatever other port, you would still need a device to encode or transmit the data, and another program inside the computer to decode the data being sent.
And remember that different hardware comes with different software (drivers) that might decode the raw data in odd ways, making your I/O hardware dependent.
Hope this helps. That's why USB was invented in the first place: to make things hardware and OS independent.
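As a rough illustration of the host side of that approach, here is a sketch using the cross-platform hidapi library (my suggestion, not part of the answer above); the vendor/product IDs are placeholders for whatever your PIC enumerates as:

#include <cstdio>
#include <hidapi/hidapi.h>   // cross-platform HID library; header path may vary by install

int main() {
    if (hid_init() != 0) return 1;

    // Placeholder vendor ID / product ID - substitute your device's values.
    hid_device* dev = hid_open(0x04D8, 0x0042, nullptr);
    if (!dev) { std::fprintf(stderr, "device not found\n"); return 1; }

    unsigned char report[64];
    for (int i = 0; i < 10; ++i) {                      // read ten input reports, then quit
        int n = hid_read(dev, report, sizeof report);   // blocks until a report arrives
        if (n < 0) break;
        std::printf("got %d bytes, first byte = 0x%02x\n", n, report[0]);
    }

    hid_close(dev);
    hid_exit();
    return 0;
}

The same program runs unchanged on Windows, macOS and Linux, which is the OS independence being claimed for the HID route.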

How to write usb touchscreen driver kext in os x 10.9?

I want to write a USB touchscreen kext for a USB touch screen.
I have read the Kernel Extension Programming Topics, I/O Kit Fundamentals, etc.
My questions are:
1. How do I get the input report messages from the touch screen?
2. How do I post the coordinate info to the system?
I have no idea, can anybody help?
It depends on the hardware; moreover, this question is quite broad - you'll need to be more specific in your question to get more specific answers. I'll try to provide a broad overview:
A touchscreen has 2 parts:
(1) Output: showing the image coming from the computer on the display
(2) Input: the touch events to feed back into the computer
As you haven't asked about (1) at all, I assume your device just plugs into a display port on the Mac and is already displaying correctly. If not, you'll want to look into the IOFramebuffer API.
For (2) - Pretty much all USB input devices are HID devices of some form. If you're new to HID in general, you'll probably want to read and understand the USB HID specification and related documentation as you'll be using that information throughout.
OSX already comes with comprehensive support for the standard HID device classes such as keyboards, mice, touchpads, graphics tablets, etc. If your device claims to be any kind of HID device, OSX should already be detecting it and attaching its generic HID driver to it. You should see an IOUSBHIDDriver instance in the I/O Registry (e.g. using Apple's IORegistryExplorer tool, or ioreg on the command line).
I'd also expect your device to conform to HID's absolute pointing device profile, so at least single touches should already be working properly. If it's a multitouch device, or you need other extra features, you'll probably want to implement an IOUSBHIDDriver subclass that generates or converts the necessary multitouch events.
If your device for some reason isn't already a HID USB device, you'll need to write a custom USB driver for it, and convert the events coming from it into HID events, as the HID events are passed directly into userspace and processed there. You can actually write USB drivers and generate HID events from userspace, so you might be able to avoid writing any kernel code at all if you prefer.
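As a starting point for exploring what HID input the system is already seeing, here is a minimal userspace sketch using the IOHIDManager API (the callback name and the catch-all matching are just for the example; in practice you'd match on your touchscreen's vendor/product IDs or on the Digitizer usage page):

#include <cstdio>
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/hid/IOHIDManager.h>

// Print every HID input value delivered to this process.
static void handleValue(void* context, IOReturn result, void* sender, IOHIDValueRef value) {
    IOHIDElementRef element = IOHIDValueGetElement(value);
    std::printf("usage page 0x%x, usage 0x%x, value %ld\n",
                (unsigned)IOHIDElementGetUsagePage(element),
                (unsigned)IOHIDElementGetUsage(element),
                (long)IOHIDValueGetIntegerValue(value));
}

int main() {
    IOHIDManagerRef manager = IOHIDManagerCreate(kCFAllocatorDefault, kIOHIDOptionsTypeNone);
    IOHIDManagerSetDeviceMatching(manager, NULL);   // NULL = match every HID device
    IOHIDManagerRegisterInputValueCallback(manager, handleValue, NULL);
    IOHIDManagerScheduleWithRunLoop(manager, CFRunLoopGetCurrent(), kCFRunLoopDefaultMode);

    if (IOHIDManagerOpen(manager, kIOHIDOptionsTypeNone) != kIOReturnSuccess) {
        std::fprintf(stderr, "could not open HID manager (check input monitoring permissions)\n");
        return 1;
    }
    CFRunLoopRun();   // values arrive via the callback above
    return 0;
}

If your touchscreen's reports show up here, the generic HID driver is already doing the low-level work and you only need to interpret (or remap) the values it produces.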
Apple provides some documentation on HID:
The HID Class Device Interface Guide covers some general concepts and the userspace interfaces.
The Kernel Framework Reference has API documentation for the various IOHID* classes in the kernel.
If you're going to be writing your own kernel HID device driver, your best bet is probably the IOHIDFamily source code. You can probably also find some open source examples around the web.
Apple's USB mailing list is probably also worth checking, both for the archives and if you have questions. The darwin-kernel and darwin-drivers lists are also relevant.

Programmable usb host to host controller

Further to this question, I'm looking for a device that will allow me to connect two USB hosts, while still being fully programmable. I would like something that can do the following:
Masquerade as an arbitrary USB device
Take input from a PC and do nothing but pass it on to the other host.
I've been looking for a microcontroller (preferably pre-assembled) that will allow me to do this, but have so far come up blank. Does anyone know of a controller (preferably cheap) that will allow me to do this?
Take input from a PC and do nothing but pass it on to the other host.
This is nonsensical from a USB perspective. USB is a host-based protocol: a device will never send data unless a host requests it first. Keep in mind that 'host' and 'device' have specific meanings within the protocol itself; you can think of the 'host' as the master and the 'device' as the slave. These roles are baked into a USB controller. There is no way to convince a standard USB controller in any given PC or peripheral to swap roles. There are add-in cards for PCs that are USB device controllers (making your PC act as a device), but 'cheap' is not a word I would use to describe them.
What you really are trying to do is create a USB device-to-device bridge. So, alright, you need two USB 2.0 device controllers (maybe not that expensive; some micros already have two on-the-go controllers). Then you have to get them to pass something meaningful to each other. That's really hard because, as I mentioned above, a host must tell a device to send data, and can send data to a device whenever it wants. Assuming a game controller shows up as a HID device (and the console doesn't listen for some weird, custom descriptor and use some weird, custom protocol), interrupt pipes will be used to transfer data. This pipe is guaranteed to be polled at some minimum rate. So you have the console requesting data at some rate, which is not fixed, and a host-as-gamepad sending data at some rate. It's going to be impossible for the two to sync up, so you'll need some kind of decent-sized buffer on the gadget you're trying to create, which adds more $$ and more complexity.
USB is also pretty fast. At high speed (USB 2.0), microframes are 125 microseconds long. That means you have to be completing requests at around 8 kHz, which seems slow compared to the clock speed of a microcontroller, but keep in mind you have to be doing everything else at once. I'm not sure if there's a hobbyist-level microcontroller that has everything you need, especially one for which you don't need to roll your own USB stack.
Try the FTDI FT232. These are protocol-converter chips; they will translate the data to I2C, SPI, serial, whatever you want. Nice, easy and cheap :). FTDI even has better ones (the Vinculum line), with OTG and everything you need, but I would start with the FT232. You just need your favourite microcontroller to do the work you want. On the other hand, you'll have to make a little board and maybe do some soldering :). Good luck!
You will need $$$$ in equipment and $$$$$ in development work to achieve things the way you imagine. You'd be better off telling us what you want to emulate, and taking a look here to see whether someone has already done it for you. If not, use the LUFA library with one of the bigger USB AVRs that can behave as a USB host, and connect two of them (one as USB device and the other as USB host) via some other protocol (I2C/SPI/UART).
In the meantime there is a great solution for this problem. Using the FaceDancer library together with one of the boards supported by this great piece of software (e.g. the GreatFET One), you get exactly what you want:
The GreatFET One has two USB connectors: the first one is used to simulate ANY kind of USB device, while the second one is used to forward all requests/responses received/sent via the first connector.
Of course, this tool requires that you know the USB protocol of the device you want to simulate. Although there are some code samples, you still have to know what you are doing as soon as you customize them.