How does a USB gadget know the host expects an IN transfer?

I am developing a simple loopback using gadgetfs, but I am a bit
confused as to how gadgetfs knows that the host initiated an IN transfer.
Gadgetfs uses read/write on endpoints, so to my understanding, it can only:
(1) When using "read" on an OUT endpoint file descriptor - accept a new transfer from host
to device.
(2) When using "write" on an IN endpoint file descriptor - start a transfer from device to host.
(1) above seems simple to understand, but I have a
misunderstanding about (2):
Isn't it that a write into an IN endpoint should be accepted only when the
host has initiated a transaction (according to the USB standard)?
If so, then how does the gadget know that the host initiated a
transaction on the IN endpoint and expects a transfer at this moment?

The gadget will have a USB device controller which handles all the requests from the USB host controller. So the job of GadgetFS is to fill the endpoint buffers with the help of the device controller driver. The sequence of events is as follows (a userspace sketch of the write path follows below):
The application running in the USB gadget has some data to transfer to the host.
The application uses the GadgetFS interface to transfer the data.
GadgetFS then uses the standard USB device controller driver API to send the data to the controller.
The USB device controller driver takes the buffer address passed down by GadgetFS and adds it to the asynchronous list of the targeted controller endpoint (EHCI-style controller).
When the device controller receives an "IN" token from the host controller, it reads the EP details from the
token and schedules the corresponding EP for the data transfer.
The controller DMA then reads the data from the address of the buffer which was added in step 4.
Those are the overall steps; you can check the controller spec for more details.
These steps are more or less the same for EHCI and XHCI.
Remember that all the transactions are taken care of by the device controller; the job of the application/GadgetFS is to fill up the buffers pointed to by the EP.
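To make the userspace side concrete, here is a minimal Python sketch of the loopback, assuming hypothetical endpoint files ep1out and ep2in under /dev/gadget whose descriptors have already been configured. The point to notice is that write() on the IN endpoint just queues the buffer with the device controller and completes once the host's IN tokens have drained it:

import os

# Hypothetical endpoint files; the real names depend on your UDC and on
# the descriptors your program wrote to them first.
out_fd = os.open("/dev/gadget/ep1out", os.O_RDONLY)  # host -> device
in_fd = os.open("/dev/gadget/ep2in", os.O_WRONLY)    # device -> host

while True:
    # Blocks until the host completes an OUT transfer to this endpoint.
    data = os.read(out_fd, 512)
    if not data:
        break
    # Queues the buffer with the device controller; the call returns once
    # the host has issued enough IN tokens to drain it (steps 4-6 above).
    os.write(in_fd, data)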

Related

Multiple MQTT connections on a single IoT device

Using the azure-iot-sdk for Python, I have a program that opens a connection to the IoT Hub and continually listens for direct methods, using the MQTT protocol. This is working as expected. I have a second Python program, invoked hourly from cron, that connects to the IoT Hub and updates the device twin for my device. Again, this uses MQTT. Everything is working fine.
However, I've come across in the documentation that a device can only have one MQTT connection at a time, and opening a second will cause the first to drop. I'm not seeing this; however, is what I'm doing unsupported?
Should I have a single program doing both tasks and sharing a single connection?
Yes, that is correct: you can't have more than one connection with the same device ID to the IoT Hub. Over time you will see inconsistent behavior, and that scenario is unsupported. You should use a single program with a unique device ID doing both tasks.
Depending on the scenario, you may also want to consider using an iothubowner connection string to do service-side operations like managing your IoT hub, and optionally sending messages, scheduling jobs, invoking direct methods, or sending desired property updates to your IoT devices or modules.
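As a rough sketch of the single-program approach, assuming the azure-iot-device (v2) Python SDK, one client can handle both direct methods and twin updates over a single MQTT connection; the connection string and property values are placeholders:

from azure.iot.device import IoTHubDeviceClient, MethodResponse

CONN_STR = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()

def on_method(request):
    # Handle every direct method invocation on this one connection.
    print("Direct method:", request.name, request.payload)
    client.send_method_response(
        MethodResponse.create_from_method_request(request, 200, {"ok": True}))

client.on_method_request_received = on_method

# The hourly twin update can run inside this same process (e.g. from a
# timer thread) instead of a separate cron job with its own connection.
client.patch_twin_reported_properties({"lastReport": "2020-01-01T00:00:00Z"})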

Azure IoT Edge as Transparent Gateway - add Gateway name as property to messages?

I'm using Azure IoT Edge in transparent gateway mode. Now I would like to add a property to any message from leaf devices that gets passed through the Edge gateway, that basically contains the Edge device id (or its hostname etc). Thus allowing to trace the message flow.
Is this somehow possible? I already tried to put a custom module in between which I would just route all messages through:
"fromRaw": "FROM /messages/* WHERE NOT IS_DEFINED($connectionModuleId) INTO BrokeredEndpoint(\"/modules/taggingmodule/inputs/input1\")",
"intoUpstream": "FROM /messages/modules/taggingmodule/* INTO $upstream"
But doing this I lose the "transparent" message-passing. Any messages that are then received in the cloud IoT Hub appear to come from the Edge device instead of the leaf device.
This is not how the transparent gateway was designed to work (ref: here):
The gateway simply passes communications between the devices and IoT
Hub. The devices are unaware that they are communicating with the
cloud via a gateway and a user interacting with the devices in IoT Hub
is unaware of the intermediate gateway device.
To work around this issue, there are two options (a sketch of the second follows this list):
Building on what you have now, embed additional information in the messages generated on the leaf device to identify where each message comes from.
Remove the custom module from the transparent gateway and add the Edge device id (or its hostname, etc.) to the leaf device's messages.
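For the second option, a leaf device using the azure-iot-device Python SDK can stamp each message with the gateway's identity before sending it; the property name, hostname, and connection string below are illustrative:

from azure.iot.device import IoTHubDeviceClient, Message

# Leaf-device connection string with GatewayHostName appended so traffic
# flows through the transparent gateway (placeholder values).
CONN_STR = ("HostName=...;DeviceId=leaf1;SharedAccessKey=...;"
            "GatewayHostName=my-edge-device")

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
msg = Message('{"temperature": 21.5}')
msg.custom_properties["edgeGatewayId"] = "my-edge-device"  # illustrative name
client.send_message(msg)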

how to register for a Packet-in-message change-event-notification?

I am trying to get a notification (over a REST connection) when any host tries to communicate in my network. I registered for a Packet-in-message change-event-notification that I found in the packet-processing module, but when I start listening using my websocket client, I receive nothing.
I was expecting a notification for each packet-in reaching the controller. Am I misunderstanding the use of this module? What is the purpose of this packet-in-message?
Is there a way to get a notification when any host tries to set up communication with another host (over REST)?
I built my topology in Mininet. It contains some OpenFlow switches and hosts. The OpenDaylight controller has the l2switch, restconf, openflowplugin, and dlux features enabled.
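For reference, the usual way to consume such notifications over RESTCONF in OpenDaylight is to create a notification stream with the sal-remote RPC and then subscribe to it to obtain a websocket URL. The paths, payload shape, and notification QName below are assumptions that vary between ODL releases, so treat this as a sketch only:

import requests
import websocket  # websocket-client package

BASE = "http://localhost:8181/restconf"  # assumed controller address
AUTH = ("admin", "admin")                # assumed credentials

# 1. Create a stream for the packet-processing notification (assumed QName;
#    check the packet-processing and sal-remote YANG models of your release).
rpc = {"input": {"notifications": [
    "(urn:opendaylight:packet:service?revision=2013-07-09)packet-received"]}}
r = requests.post(BASE + "/operations/sal-remote:create-notification-stream",
                  json=rpc, auth=AUTH)
stream = r.json()["output"]["notification-stream-identifier"]

# 2. Subscribing returns the websocket location in the Location header.
r = requests.get(BASE + "/streams/stream/" + stream, auth=AUTH)
ws_url = r.headers["Location"]

# 3. Listen; each OpenFlow packet-in should then arrive as a notification.
ws = websocket.create_connection(ws_url)
while True:
    print(ws.recv())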

Two Xbee in API mode - Python

First, I tested the communication of two XBees (Series 2) in AT mode and everything worked correctly.
Then I changed the coordinator to API mode and ran the script below while the router was in AT mode. I was successful and received the router's message. However, I can't get the router to work in API mode and send messages to the coordinator. I'm not sure if I can just do a simple send command, or if I need to specify the address, or if the frames have to be formatted.
Each XBee is connected to a PC. I'm using Python 3.4.
Coordinator in API mode to receive messages:
# Continuously read the serial port and process IO data received
# from a remote XBee.
from xbee import XBee, ZigBee
import serial

ser = serial.Serial('/dev/...', 9600)
xbee = ZigBee(ser)

while True:
    try:
        response = xbee.wait_read_frame()
        print(response)
    except KeyboardInterrupt:
        break

ser.close()
Has someone else done this, or does anyone know of a site that explains how the router works in API mode? All I want to do is send messages from the router to the coordinator.
API mode works the same whether the device is configured as coordinator, router or end device. If your router is always sending data to the coordinator, there's no need to run it in API mode -- just keep it in AT mode with DH and DL set to 0. It will automatically send frames to the coordinator containing all data that comes in on the serial port.
If you need to use API mode on the router for some reason, just use the python-xbee library you're already using on the coordinator.
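As a rough sketch (the serial port and payload are placeholders), sending from the router in API mode with python-xbee looks like this; a 64-bit destination address of all zeroes is the ZigBee shorthand for the coordinator:

from xbee import ZigBee
import serial

ser = serial.Serial('/dev/...', 9600)  # the router's serial port (placeholder)
xbee = ZigBee(ser)

# dest_addr_long of all zeroes addresses the coordinator; 0xFFFE in
# dest_addr tells the stack to discover the 16-bit network address.
xbee.send('tx',
          dest_addr_long=b'\x00\x00\x00\x00\x00\x00\x00\x00',
          dest_addr=b'\xff\xfe',
          data=b'Hello coordinator')
ser.close()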
To communicate in API mode, you must send frames.
A frame is composed of a header and a footer.
There are libraries to help you communicate in API mode.
http://ftp1.digi.com/support/utilities/digi_apiframes2.htm
This web site shows you how to communicate in API mode.

What is the process of closing an instance on OpenStack?

On the OpenStack cloud platform, if I want to close an instance on a compute node, what does OpenStack do? Can you tell me the process?
I assume by close you mean "terminate".
When terminating an instance the running virtual machine with an instance id of X is shut down and removed from the physical host it exists on.
The nova client query for this would be:
nova delete <instance-id>
or something to that effect.
When you make that query, the python-novaclient uses its own internal API to reach out to the nova-api RESTful API. It authenticates itself with an auth token in the HTTP header of its request. nova-api then interprets the instance termination request. It will verify any ACLs it needs to against Keystone, perform the necessary steps to shut down and remove the instance (freeing up resources for future instances), and then return a result.
Going deeper, the scheduler will send out requests over the messaging system as a result of the nova-api queries. Those messages will be received by the targeted physical hosts. There, nova-compute will interpret the request to delete the instance and perform its own local tasks. Usually this involves interfacing with libvirt to shut down the instance and free its resources. After this completes or fails, nova-compute reports the status back on the messaging bus, and the API eventually gets that message and sends it on to the user who initially requested the action.
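For reference, the same termination can be driven from Python with python-novaclient; the credentials and endpoint below are placeholders, and session handling differs between releases:

from keystoneauth1 import loading, session
from novaclient import client

# Placeholder credentials; match them to your cloud's Keystone endpoint.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                username='admin', password='secret',
                                project_name='admin',
                                user_domain_name='Default',
                                project_domain_name='Default')
sess = session.Session(auth=auth)
nova = client.Client('2', session=sess)

# Equivalent of "nova delete <instance-id>": nova-api validates the token,
# then the request fans out over the message bus to the right compute host.
nova.servers.delete('<instance-id>')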