FlexRay replay block in CANoe

I created a network in the Simulation Setup (attached below); it has only a FlexRay replay block (loaded with a .blf file) and the FlexRay cluster (loaded with the FlexRay database). Only FlexRay channel 1 is in use. The ECU is kept in a deactivated state.
[Screenshot: Setup]
I just want to replay the signals from my measurement file on the FlexRay bus and check the signals in the Trace window. But I'm unable to, and I get the following error (highlighted).
[Screenshot: Error]
I have attached the FlexRay replay block configuration below:
[Screenshots: Block configuration]
I have the following questions:
What is the reason for this error? How can I resolve it?
Do I need to make any changes in the replay block configuration? Could I get more information regarding this block? (There is plenty of documentation for the CAN replay block, but little for the FlexRay block.)
I would like to send only a few signals on the bus, but the FlexRay replay block doesn't have a signal filter option. How can I achieve that?
Any support in resolving these issues would be really helpful.

Related

Boost ASIO and file descriptor reuse

I have a multi-threaded (Linux) server that registers async_writes and async_reads on the same native file descriptor through a socket object. I noticed that under very heavy load, when the server was dropping connections, on very rare occasions a client would receive a garbled first message.
Tracking it down, the async_read detects an error on the socket and closes the socket. This closes the native file descriptor. If that file descriptor is reused before the original async_write has a chance to fire, it will find its native file descriptor valid and proceed to send its message (which is really a message from a previous session).
The only way I could see to fix this was to make the async_read and async_write callbacks aware of whether other callbacks were still registered, and only close the socket from the last one.
Has anyone seen this issue?
Haven't seen it, but it sounds plausible. Although I am surprised to see a new native file descriptor getting the exact same number as a recently closed descriptor.
You might want to put the socket in a shared_ptr and query shared_ptr::unique() in both the async_read and async_write handlers. That would be the easiest way to let the other callback know whether both callbacks are registered. If unique() returns true, you can be sure that no one else is still using this socket and you can close it.
So if the connection gets dropped, async_read can check unique(). If it is true, close the socket. And let go of the shared_ptr in either case.
Then, when async_write also fires, it will find unique() true and can close the socket, unless async_read has already closed it.
The only drawback is of course: async_write also has to fire (perhaps with an error code) in order to close the socket.
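A minimal sketch of that pattern, assuming Boost.Asio and C++14 (the function name start_read and the session logic are illustrative, not a complete server):

    #include <boost/asio.hpp>
    #include <array>
    #include <memory>

    using boost::asio::ip::tcp;

    void start_read(std::shared_ptr<tcp::socket> sock)
    {
        auto buf = std::make_shared<std::array<char, 1024>>();
        // The handler captures its own copy of the shared_ptr, so
        // use_count() reflects how many handlers can still touch it.
        sock->async_read_some(boost::asio::buffer(*buf),
            [sock, buf](boost::system::error_code ec, std::size_t) {
                if (ec && sock.use_count() == 1) {
                    // No other pending handler references the socket,
                    // so closing cannot invalidate anyone else's fd.
                    boost::system::error_code ignored;
                    sock->close(ignored);
                }
                // Dropping 'sock' here releases this handler's reference.
            });
    }

An async_write registered the same way would hold a second reference, so whichever handler finishes last sees use_count() == 1 and performs the close.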
Oh, I've seen exactly this in production code. (Much fun: we were talking a proprietary protocol over a TCP socket to a MySQL server.) The problem is when some thread "handles" (mis-handles) errors by closing sockets using the native handle (fd). Don't. Use shutdown (perhaps with cancel) instead and let the destructor take care of close. Of course, the real problem is the non-owning copies of the handle (fd), which are the cause of the resource race.
Critical Note:
"Tracking it down, the async_read detects an error on the socket and closes the socket. This closes the native file descriptor."
That's patently UNTRUE for Asio itself. Perhaps you have (third-party) code in the completion handlers doing that, but, as I mention above, you cannot afford to do that.
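A sketch of the shutdown-instead-of-close advice (Boost.Asio; it is assumed the surrounding session object is the sole owner of the socket):

    #include <boost/asio.hpp>

    using boost::asio::ip::tcp;

    // Called from an error path. Pending async operations complete
    // with an error, but the fd stays valid until the owner's
    // destructor runs, so no handler can race on a reused fd.
    void abort_session(tcp::socket& sock)
    {
        boost::system::error_code ec;
        sock.cancel(ec);                               // fail pending ops
        sock.shutdown(tcp::socket::shutdown_both, ec); // stop further I/O
        // Deliberately no sock.close() here: the destructor of the
        // single owning object releases the fd exactly once.
    }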

Suspend operation of lwIP Raw API

I am working on a project using a Zynq (PicoZed dev board). The application runs bare-metal, uses lwIP TCP in RAW mode, and basically behaves like this:
Receive a batch of data via Ethernet, which is stored in RAM.
Process the batch of data.
Send back the processed data via Ethernet.
The problem is, I need to measure the execution time of the processing part. However, running lwIP in RAW mode forces me to call tcp_fasttmr() and tcp_slowtmr() every 250/500 ms respectively, which makes accurate measurement pretty hard. Whenever I stop calling the tcp_tmr() functions for some time, I start repeatedly receiving error messages via UART ("unable to alloc pbuf in recv_handler"). This seems to be called from some ISR related to error handling, but I cannot find the exact location.
My question is: how do I suspend the network functionality so that I don't need to call tcp_tmr() periodically? I tried closing the connection, disabling the interface (netif_set_down()) and disabling the timer interrupt, but none of it seems to have any effect on the problem.
I don't know anything about that dev board or the microcontroller on it, but you should have an ethernetif.c (lwIP port) file which contains the processing of an Ethernet receive interrupt or similar. This should be calling the lwIP function netif->input with a packet to process.
Disabling the interface won't stop this behaviour; it only stops the higher-level processing of the packet. If you are only measuring the execution time for debugging, you could try disabling the Ethernet receive interrupt and stop calling tcp_tmr() until you have processed the packets.
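A sketch of that idea; the interrupt and timer hooks are hypothetical placeholders for the platform-specific calls (on a Zynq they would typically go through the GIC driver in your BSP):

    #include <stdint.h>

    /* Hypothetical platform hooks -- replace with your port's calls,
     * e.g. masking the EMAC receive interrupt and reading a HW timer. */
    extern void eth_rx_irq_disable(void);
    extern void eth_rx_irq_enable(void);
    extern uint32_t read_cycle_counter(void);
    extern void process_batch(void);

    uint32_t process_batch_timed(void)
    {
        eth_rx_irq_disable();               /* no new frames -> no netif->input() calls */
        uint32_t t0 = read_cycle_counter();

        process_batch();                    /* the region being measured */

        uint32_t t1 = read_cycle_counter();
        eth_rx_irq_enable();                /* resume reception... */
        /* ...and resume calling tcp_fasttmr()/tcp_slowtmr() on schedule. */
        return t1 - t0;
    }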

CAN error counters and interrupts

I'm using the bxCAN peripheral of an STM32F3 MCU in an environment where
1.) it is essential that the node is detached from the network once the REC/TEC has reached the warning level (waiting for the bus-off condition is not an option)
2.) the baud rate of the host network is unknown
3.) the connection might be sporadic as the node is connected by the user
Due to 1.), the STM32 HAL CAN driver is used in IT mode, and whenever the error callback is invoked with the EWG flag set, it shuts down the transceiver and deinitializes the bxCAN. If the REC is over the limit, it recovers easily once the bxCAN is configured in silent mode, assuming there is traffic on the CAN bus. However, if the TEC is over the limit, the bxCAN won't be able to transmit another frame, as the error interrupt is triggered instantly once re-enabled -> there we are in a deadlock.
I tried decrementing the TEC by transmitting frames in silent loopback mode, but it seems successful transmissions do not affect the TEC in this mode.
I suppose the question is not specific to this peripheral but is valid for other CAN implementations as well.
Any suggestions are welcome.
I have implemented a work-around that seems to work fine, with the following requirements:
1.) whenever the CAN error ISR is triggered, it disconnects the node from the bus (the transceiver is powered off)
2.) not all interrupt sources are enabled, only the ones of higher severity than the last error state (e.g. in PASSIVE state the WARNING and PASSIVE interrupts are disabled and the BUSOFF interrupt is enabled); see the sketch after this list
3.) the last error state, and thus the set of enabled interrupt sources, is updated whenever a.) an error ISR is triggered or b.) polling the CAN peripheral at a high frequency shows a change in the error state
4.) whenever attempting a connection to the bus, the REC must first heal in listen-only mode; for this, traffic is required on the bus.
With these requirements implemented, the node is able to fail silently but recover to normal operation.
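A sketch of requirement 2.), assuming an STM32 CMSIS device header (the register and bit names below are the standard bxCAN ones; transceiver control is omitted):

    #include "stm32f3xx.h"   /* assumed CMSIS device header */

    /* Enable only the error interrupts of higher severity than the
     * current error state, so the ISR cannot retrigger immediately. */
    static void can_update_error_irqs(CAN_TypeDef *can)
    {
        uint32_t ier = can->IER & ~(CAN_IER_EWGIE | CAN_IER_EPVIE | CAN_IER_BOFIE);

        if (can->ESR & CAN_ESR_BOFF) {
            /* bus-off: nothing more severe left to watch */
        } else if (can->ESR & CAN_ESR_EPVF) {
            ier |= CAN_IER_BOFIE;                    /* passive -> watch bus-off only */
        } else if (can->ESR & CAN_ESR_EWGF) {
            ier |= CAN_IER_EPVIE | CAN_IER_BOFIE;    /* warning -> watch passive and bus-off */
        } else {
            ier |= CAN_IER_EWGIE | CAN_IER_EPVIE | CAN_IER_BOFIE;
        }
        can->IER = ier | CAN_IER_ERRIE;              /* keep the master error IRQ enable */
    }

Call this from the error ISR and from the polling path in 3.) whenever the error state changes.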

When do the events emitted by Port get emitted? And what do they mean?

As far as I can tell there are 7 events dispatched by a NoFlo port: attach, connect, begingroup, data, endgroup, disconnect, detach.
To me some of these events sound very similar, such as attach + connect and disconnect + detach. What is the difference?
What do begingroup and endgroup mean?
When do these events get emitted and when are they generally used?
I've seen the documentation at: http://noflojs.org/documentation/components/#portevents
Would I be correct to assume that attach and detach are for handling NoFlo UI cases, e.g. changing the state of the component's look?
Another assumption would be that connect gets fired every time before data is sent, then data, then disconnect? Seems a bit odd to me...
I'm completely in the dark when it comes to groups.
attach and detach happen when the NoFlo Network attaches a socket to (or removes it from) the port. So usually they happen at network start-up time, before IIPs get sent.
The exception to this is when you're live-editing the graph with a tool like Flowhub. In that situation attach/detach can happen whenever you connect or remove wires.
Most components don't need to care about the attachment events.
connect happens before the upstream connection sends data, and disconnect when the upstream connection says that it has sent everything it is intending to send. So in effect they're beginning of transmission and end of transmission events. An upstream component may choose to connect again after a disconnect if it has a new batch of data to send.
data is the event for actual payload-containing packets.
begingroup and endgroup are the "bracket IPs" containing metadata about the data being sent. They can be used for creating tree structures with packet data.
For example, filesystem/ReadFile will send the file contents as a data packet, but the filename is sent via a bracket IP, using begingroup/endgroup packets around the actual file contents.
The noflo-groups library provides lots of components for utilizing group information for synchronization, routing, etc.
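For example, the stream a downstream port receives for a single file read might look like this (illustrative; the filename is made up):

    connect
    begingroup "/tmp/hello.txt"    <- filename sent as bracket metadata
    data <file contents>
    endgroup "/tmp/hello.txt"
    disconnect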

Using NServiceBus, how to check if the bus is ready?

If I try to send a message (using bus.Send<SomeCommand>(...), for example) and the endpoint is not available for some reason (perhaps it has not been set up yet), I get an exception similar to the following:
The destination queue 'myqueue#mycomputer' could not be found. You may
have misconfigured the destination for this kind of message
(MyCommands.SomeCommand) in the MessageEndpointMappings of the
UnicastBusConfig section in your configuration file. It may also be
the case that the given queue just hasn't been created yet, or has
been deleted.
I would like to avoid this by checking ahead of time (on app startup) that the bus is ready and the endpoint is reachable.
Short of catching exceptions on every bus.Send, is there some other way to determine the health of the bus? I'm looking for something like bus.IsReady() but I can't find it.