Simulink: Controlling model from external process

What is the easiest way (without code generation) to control a running model's inputs from an external source (say, an external process connected via stdin or a TCP/IP socket)? Is there a generic way to set this up in a way that does not require injecting explicit TCP/IP receive blocks in the model?

Related

Find placeholder and other tensors in remote graph

I'm running a distributed graph in a TensorFlow cluster, and I wish to reconnect to it from a client as part of a prediction service. The only way I've found so far to make this work is via queues for the client's input and output, using shared_name.
In principle named variables could be used for this as well, but is there any other way, e.g. using placeholders?
I've tried exploring the graph from the client's session, but it only seems to contain nodes added by the client itself.

How to send USB control transaction on nonzero endpoint (libusb)?

I'm writing code to learn about the USB peripheral on a Freescale Kinetis microcontroller. I've managed to get through enumeration on a Linux host, and I can send & receive packets using vendor-custom codes on EP0, interacting with a libusb test program.
It looks like I can configure additional control endpoints (non-zero endpoint numbers) on the microcontroller, but I don't see a way to make libusb send / receive control transfers to those endpoints. (libusb_control_transfer doesn't require an endpoint number, though libusb_bulk_transfer and libusb_interrupt_transfer do.)
Are non-zero control endpoints so uncommon or unnecessary that it's not worth bothering with them? Is there some way to get libusb to execute control transactions to non-zero endpoints?
You can try to modify the endpoint field in the libusb_transfer structure of the asynchronous I/O API.
But it would surprise me if your microcontroller could actually support non-zero control endpoint(s) - not that many do.
In practice you would rather use either interrupt or bulk endpoints. Both have less overhead, allowing higher throughput with bulk transfers (see for example USB 2.0 spec, Table 5-2 vs. Table 5-9).
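As a concrete illustration of the first suggestion, the sketch below fills a control transfer with the asynchronous API and then overrides the endpoint field before submitting it. This is only an experiment under the assumptions stated in the comments: dev_handle, VENDOR_REQ and the 0x83 endpoint address are placeholders, and libusb is not guaranteed to accept control transfers on a non-zero endpoint.

/* Sketch: attempt a control transfer on a non-zero endpoint by overriding
 * transfer->endpoint after the fill helper. VENDOR_REQ and endpoint 0x83
 * are made-up values for illustration. */
#include <stdlib.h>
#include <libusb-1.0/libusb.h>

#define VENDOR_REQ 0x01   /* hypothetical vendor-specific bRequest */

static void on_complete(struct libusb_transfer *xfr)
{
    /* inspect xfr->status and xfr->actual_length here */
    libusb_free_transfer(xfr);   /* also frees buf (FREE_BUFFER flag below) */
}

int submit_nonzero_ep_control(libusb_device_handle *dev_handle)
{
    unsigned char *buf = malloc(LIBUSB_CONTROL_SETUP_SIZE + 64);
    struct libusb_transfer *xfr = libusb_alloc_transfer(0);
    if (!buf || !xfr)
        return LIBUSB_ERROR_NO_MEM;

    libusb_fill_control_setup(buf,
        LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_RECIPIENT_DEVICE | LIBUSB_ENDPOINT_IN,
        VENDOR_REQ, 0, 0, 64);
    libusb_fill_control_transfer(xfr, dev_handle, buf, on_complete, NULL, 1000);

    xfr->endpoint = 0x83;                     /* non-zero control endpoint, IN */
    xfr->flags    = LIBUSB_TRANSFER_FREE_BUFFER;

    return libusb_submit_transfer(xfr);
}

Whether this actually reaches the device as a control transaction on endpoint 3 depends on the host controller driver and on how libusb handles the overridden endpoint field, so treat it as something to test rather than rely on.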

What is the use of multiple control endpoints (non-EP0)?

I learned on the OSDev wiki that Endpoint 0 is the default control pipe, allowing for bi-directional control transfers. This is used for device configuration, e.g. to retrieve device descriptors. The USB 2.0 spec explains this more thoroughly in section 5.5 Control Transfers.
There is also a limited number of endpoints available (2 for low-speed, 15 for full- and high-speed devices). Somewhere in the USB 2.0 spec, I have read that there must be at least one control pipe. This implies that there may be multiple control endpoints, but what is the use of that? Do you know any particular USB device or class that has a non-zero endpoint configured as a control pipe?
Later, I found this in the spec, section 10.1.2 Control Mechanisms:
A particular USB device may allow the use of additional message pipes
to transfer device-specific control information. These pipes use the
same communications protocol as the default pipe, but the information
transferred is specific to the USB device and is not standardized by
the USB Specification.
If I understand it correctly, this means that non-EP0 control endpoints cannot be used to configure the device (say, with a standard request such as GET_DESCRIPTOR). But the setup/data/status stages still seem to be available ("[..] use the same communications protocol [..]"). Is this correct? Or is the use of standard/class requests forbidden for non-EP0?
Background: while working on an emulated USB device in QEMU, the need arose for a USB monitor for debugging purposes. While inspecting the QEMU core USB code, I noticed that it only processes control commands for EP0; other endpoints are treated as data. There are some virtual devices (host-libusb) that always reject control transfers on those other endpoints. Hence the question of whether this is correct behavior (and, if it is, whether there are devices that actually implement control transfers on non-zero endpoints).
As far as I can tell, there is no use for a non-EP0 control endpoint. I have developed several products that use custom control transfers on endpoint 0 as the main way to send device-specific requests and I have not encountered any fundamental problems with doing that.
If you did make a non-EP0 control endpoint I think your understanding is correct; you wouldn't be able to use it for standard requests but you would be able to use it for custom requests and the transaction sequences would be the same as on EP0.
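For reference, the EP0 approach described above looks roughly like this with libusb's synchronous API; REQ_SET_MODE and its wValue/wIndex meanings are made-up examples of a device-specific request, not part of any standard or class.

/* Sketch: a custom (vendor) request carried over the default control
 * endpoint (EP0). REQ_SET_MODE is a hypothetical device-specific request. */
#include <stdio.h>
#include <stdint.h>
#include <libusb-1.0/libusb.h>

#define REQ_SET_MODE 0x42   /* hypothetical vendor-defined bRequest */

int set_device_mode(libusb_device_handle *dev, uint16_t mode)
{
    /* Host-to-device, vendor request, addressed to the device. */
    int r = libusb_control_transfer(dev,
                LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_RECIPIENT_DEVICE | LIBUSB_ENDPOINT_OUT,
                REQ_SET_MODE,
                mode,        /* wValue carries the small parameter directly */
                0,           /* wIndex */
                NULL, 0,     /* no data stage needed for this request       */
                1000);       /* timeout in milliseconds                     */
    if (r < 0)
        fprintf(stderr, "control transfer failed: %s\n", libusb_error_name(r));
    return r;
}

The same setup/data/status sequence would apply on a non-EP0 control endpoint; only the endpoint address would differ.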

What is the most efficient and concurrent way to implement a public API for an existing command-line program?

An existing command-line program generates some output when called with some parameters, and then exits.
I would like to modify this program to run on an event loop and listen through a public API (possibly on the same machine). There seem to be multiple ways of implementing this:
make the API external to the program and do system calls
turn the program into a library, and include the necessary functionality into the API itself
local sockets (e.g. a TCP line server)
HTTP server (producing JSON, XML, etc)
Considering efficiency under load, concurrency and scalability, what would be the best approach?
API design also depends on the clients; maybe you can give some more information:
kind of client (real user, technical client polling things, etc.)
trust-level of clients (open to anyone, closed set of clients to be authenticated)
Regarding scalability requirements:
how many users
expected load (peak + average)
Another idea could be a simple SSH server, so anyone can execute the script from outside:
$ ssh user@yourhost.com yourCLIProgram.sh -param1 value1 -param2 value2
Then you don't need the overhead of an event-listening loop.
What does the program do in more detail?

Is there a way to tell the terminal to wait before sending more data?

I have embedded firmware which has a terminal over a serial connection. From the terminal I issue a command which waits for data (a text file) that it should save to a flash chip. However, writing to flash is much slower than the terminal transmission.
The text file may be pretty big (many kB), so in a small embedded environment I cannot simply dump it to RAM. I wondered whether it is possible to get a standard terminal emulator (which has drag/drop support for files) to pause the transmission every time the write buffer is full and to continue again after the write is done. I haven't found anything that helps me with this.
Of course I could make a PC frontend which understands this trick, but at a basic level it would be nice if everything could be used through a normal terminal if needed.
For a basic serial connection you could see if the hardware supports flow control. These would be the CTS/RTS lines (clear to send / request to send).
http://en.wikipedia.org/wiki/RS-232_RTS/CTS#RTS.2FCTS_handshaking
However many simple embedded systems do not implement this type of flow control.
If the hardware does not support flow control, then you will have to look at using some form of software flow control. You may be able to implement XON/XOFF flow control ( http://en.wikipedia.org/wiki/XON/XOFF ) or implement a simple file transfer protocol like XMODEM, ZMODEM, or even TFTP. This depends on what your terminal can support.
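If you go the XON/XOFF route, the firmware side can be as simple as sending XOFF before each slow flash write and XON afterwards. A minimal sketch, assuming hypothetical uart_recv_byte/uart_send_byte/flash_write_page helpers and a terminal emulator that honors software flow control:

/* XON/XOFF throttling sketch. All helper functions and PAGE_SIZE are
 * placeholders for whatever the firmware actually provides. */
#include <stdint.h>

#define XON  0x11            /* DC1: ask the terminal to resume sending */
#define XOFF 0x13            /* DC3: ask the terminal to pause sending  */
#define PAGE_SIZE 256

extern int  uart_recv_byte(void);               /* blocking read (assumed)    */
extern void uart_send_byte(uint8_t b);          /* blocking write (assumed)   */
extern void flash_write_page(const uint8_t *p); /* slow flash write (assumed) */

void receive_to_flash(uint32_t total_len)       /* length known from the command */
{
    uint8_t page[PAGE_SIZE];
    uint32_t received = 0;

    while (received < total_len) {
        uint32_t n = 0;
        while (n < PAGE_SIZE && received < total_len) {
            page[n++] = (uint8_t)uart_recv_byte();
            received++;
        }
        uart_send_byte(XOFF);     /* pause the sender while we write flash */
        flash_write_page(page);
        uart_send_byte(XON);      /* resume transmission */
    }
}

Note that a few bytes can still arrive after the XOFF goes out, so the UART receive buffer needs a little headroom, and this only works if the terminal has software flow control enabled.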
I always use XMODEM when programming data into FLASH via a serial link from a PC. When using XMODEM it only sends one data packet at a time and waits for you to acknowledge the packet before sending the next one.
This means we control the flow via software on the receiving side:
Get packet ->
Write packet ->
Ack packet ->
Repeat until done...
XMODEM can be implemented on the smallest of devices (less than 1K of RAM) and the code is very simple. All serial terminals support XMODEM (up to Windows XP, Windows shipped with an XMODEM-capable terminal). XMODEM requires no special hardware.
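To make the receive-side flow above concrete, here is a minimal sketch of an XMODEM receiver (original checksum variant) that acknowledges each 128-byte packet only after the flash write has finished; uart_recv_byte, uart_send_byte and flash_write_block are assumed platform helpers, and timeout/duplicate-packet handling is omitted for brevity.

/* Minimal XMODEM (checksum) receiver sketch for the flash-writing side. */
#include <stdint.h>

#define SOH 0x01
#define EOT 0x04
#define ACK 0x06
#define NAK 0x15

extern int  uart_recv_byte(void);                              /* blocking read (assumed)    */
extern void uart_send_byte(uint8_t b);                         /* blocking write (assumed)   */
extern void flash_write_block(const uint8_t *p, uint32_t len); /* slow flash write (assumed) */

void xmodem_receive_to_flash(void)
{
    uint8_t data[128];
    uint8_t expected = 1;

    uart_send_byte(NAK);                        /* ask the sender to start   */

    for (;;) {
        int c = uart_recv_byte();
        if (c == EOT) {                         /* sender is done            */
            uart_send_byte(ACK);
            return;
        }
        if (c != SOH)                           /* resync on packet start    */
            continue;

        uint8_t blk  = (uint8_t)uart_recv_byte();
        uint8_t nblk = (uint8_t)uart_recv_byte();
        uint8_t sum  = 0;
        for (int i = 0; i < 128; i++) {
            data[i] = (uint8_t)uart_recv_byte();
            sum += data[i];
        }
        uint8_t csum = (uint8_t)uart_recv_byte();

        if (blk == expected && (uint8_t)(blk + nblk) == 0xFF && csum == sum) {
            flash_write_block(data, 128);       /* slow write happens here;  */
            uart_send_byte(ACK);                /* the sender waits for this */
            expected++;
        } else {
            uart_send_byte(NAK);                /* request a retransmit      */
        }
    }
}

Because the sender will not transmit the next packet until it sees the ACK, the flash write never has to keep up with the raw serial speed.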
Here is the spec.
Here is an example implementation.