I'm receiving a phase-modulated signal representing a periodic pulsed byte that looks something like this:
-------------|-|||--||-------------|||--||||-------------|--||||-|-------------
I'm trying to understand how I might split this signal roughly halfway between each pulse of activity so I can save each pulse as a separate IQ file, a bit like this:
------|-|||--||------
-------|||--||||-------
------|--||||-|------
The split does not have to be precise; it only needs to avoid bisecting any of the pulses. Here are some things I've tried.
Creating a custom block on the receiver that uses a timing interval (roughly the kind of block sketched below). Unfortunately, the pulses aren't perfectly periodic, so the split eventually drifts.
Since I also control the transmitter, I tried sending an async PMT message from the transmitter to the receiver, but again, because the message is asynchronous the timing is too unpredictable to make an accurate enough cut.
I was also wondering if there is any way to propagate a tag from a transmitter to a receiver if both are in the same flow graph, but I couldn't find any info on how to do this.
Any ideas? Thanks!
I'm writing a program in LabVIEW 2014 in order to control a linear actuator. The program is very simple: it sets a speed and then runs the subVIs to move the actuator back and forth.
There is a case structure inside a while loop so that it stops when a desired number of iterations is reached. The problem is that the while loop seems to iterate faster than the code inside the case structure executes, and therefore the program stops before all the cycles of movement have been completed.
send pulses subVI:
activate subVI:
I tried different time delays in different parts of the code, but none of that worked. I think the issue is that the while loop iterations run faster than the code of the case structure and somehow I need to slow it down. Or maybe I'm wrong and it is a completely different thing.
Here is the link to the actuator documentation:
https://jp.optosigma.com/html/en_jp/software/motorize/manual_en/SRC-101_InstructionManual_Ver1_1_EN.pdf
Welcome to the fun and infuriating world of interfacing to serial instruments.
Each iteration of a LabVIEW loop can only complete once all the code inside the loop structure has completed, so it's not possible that 'the while loop iterations run faster than the code of the case structure'. There's nothing explicitly wrong with any of your code, but evidently it isn't doing what you expected it to. The way to approach developing an instrument driver is always to start with the simplest case (e.g. one single movement of your actuator), get that working, and build up from there.
The documentation for an instrument's serial interface is rarely perfect and yours is no exception, but it does tell us that
every command is acknowledged by a response, and
you should not send a new command until you have received the response from the previous command.
Your code to send commands and receive the response looks OK. A VISA Read operation will read bytes from the computer's serial buffer until either the number of bytes to read is reached, or a byte matching the termination char is read, or the timeout expires. The manual implies that the instrument's responses are followed by the CR and LF characters, and the default configuration of the serial port in LabVIEW is to terminate each read when an LF is received, so you shouldn't need a time delay between each write and the following read; the instrument's response will be received into the buffer by the OS, then your code will read it out and return it as soon as it hits the LF.
What isn't completely clear from the manual is how the instrument responds to the activation command, G: - does it
Return the acknowledgement immediately, then execute the movement: you can check whether the movement is finished using the status command !:, or
Execute the movement, then return the acknowledgement to show that it's finished.
I think it's 1., but that's the first thing I would check. Unless all your movements complete in less than 500 ms, I think this is what is wrong here: your program receives the acknowledgement then moves straight on to send the next command, but the actuator is still moving and not ready. In this case you have two options:
add a time delay after the read, calculated to be long enough for the actuator move to finish - this would be easiest, but potentially unreliable
in a While loop after you have got the acknowledgement of the G: command, send the !: command and check the response until you get R for 'ready' (the polling logic is sketched below). (Remember that the acknowledgement string you receive will also have the CRLF on the end.) Use a time delay in this loop so you don't bombard the instrument with status checks - maybe something like 200 to 1000 ms would be suitable.
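Since the wiring itself is LabVIEW-specific, here is the same polling idea as a C++ sketch. write_line and read_line are hypothetical stand-ins for VISA Write and VISA Read (write_line appends the CR LF terminator, read_line returns one terminated response with the CR LF stripped); the rest is just the loop structure described above.

    #include <chrono>
    #include <functional>
    #include <string>
    #include <thread>

    // Hypothetical stand-ins for VISA Write / VISA Read:
    //   write_line(cmd)  - sends cmd followed by CR LF
    //   read_line()      - returns one CR LF-terminated response, terminator stripped
    using WriteLine = std::function<void(const std::string&)>;
    using ReadLine  = std::function<std::string()>;

    // Start a move with G:, then poll the status command !: until the
    // controller answers 'R' for ready, pausing between polls.
    void move_and_wait(const WriteLine& write_line, const ReadLine& read_line)
    {
        write_line("G:");                    // start the movement
        const std::string ack = read_line(); // immediate acknowledgement (case 1.)
        (void)ack;                           // could be checked for an error code here

        for (;;) {
            // Don't bombard the instrument with status checks.
            std::this_thread::sleep_for(std::chrono::milliseconds(500));
            write_line("!:");                // status query
            const std::string status = read_line();
            if (!status.empty() && status.front() == 'R') {
                break;                       // 'R' = ready, the move has finished
            }
            // anything else: still moving, poll again
        }
    }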
If it's case 2. then you would also have two options:
configure your serial port with a read timeout long enough to cover the longest move operation, then the read operation will just block until the acknowledgement is received - again this is the quick and dirty way, or
configure a short timeout, say 1000 ms, and place the read in a While loop that repeats until the acknowledgement is received or too many timeouts have occurred (sketched below). Note that a timeout is considered an error, so you will have to turn off automatic error handling for the VI and instead test the error wire out of the VISA Read, discard the timeout error and handle any other error yourself.
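Again as a non-LabVIEW illustration, the retry-on-timeout loop could look like this sketch. The hypothetical read_line stands in for a VISA Read configured with the short (~1000 ms) timeout and returns nothing when that timeout expires, which corresponds to discarding the timeout error rather than letting automatic error handling abort the VI.

    #include <functional>
    #include <optional>
    #include <string>

    // Hypothetical stand-in for a VISA Read with a short (~1000 ms) timeout:
    // returns the response if one arrived, or std::nullopt if the read timed out.
    using ReadLineWithTimeout = std::function<std::optional<std::string>()>;

    // Repeat the read until the acknowledgement arrives or too many timeouts
    // have occurred; a timeout is not treated as a failure, it just means the
    // move is still in progress.
    std::optional<std::string> wait_for_ack(const ReadLineWithTimeout& read_line,
                                            int max_timeouts)
    {
        for (int timeouts = 0; timeouts < max_timeouts; ++timeouts) {
            if (auto response = read_line()) {
                return response;     // acknowledgement received
            }
            // timeout expired: discard it and try again
        }
        return std::nullopt;         // give up and report an error to the caller
    }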
Just as a general tip, whenever you pass an error wire into a loop and out again, I would use a shift register. That way if one iteration generates an error, the next iteration will see that error and fail immediately, so (for example) if communication fails you don't have to wait for the read timeouts to expire multiple times before your code can exit.
You'll probably have to do some experimenting and referring to LabVIEW help to get this fully working but hopefully this is enough to get you going.
I've read the Camunda documentation, but I can't find anything about it.
I know it doesn't make sense to throw something that nobody will catch, but is it possible?
https://docs.camunda.org/manual/7.7/reference/bpmn20/events/signal-events/
https://camunda.com/bpmn/reference/#events-signal
In the Business Process Model and Notation 2.0 specification (available at https://www.omg.org/spec/BPMN/2.0/), p. 253, in Table 10.89 - Intermediate Event Types in Normal Flow:
(Signal) This type of Event is used for sending or receiving Signals. A Signal is for general communication within and across Process levels, across Pools, and between Business Process Diagrams. A BPMN Signal is similar to a signal flare that shot into the sky for anyone who might be interested to notice and then react. Thus, there is a source of the Signal, but no specific intended target.
Hope that helps.
Yes, this is possible. You can model a throwing signal event even when there are no receivers. The event will simply throw the signal and continue the normal flow (without anyone ever using the event).
In contrast, catching signal events cannot be used without a throwing signal event. If you use a catching signal event without a throwing signal event, the process will stop at this event and will never be able to continue.
I am working on a project using a Zynq (Picozed devboard). The application is run bare-metal, uses lwIP TCP in RAW mode and basically behaves like this:
Receive a batch of data via Ethernet, which is stored in RAM.
Process the batch of data.
Send back the processed data via Ethernet.
The problem is, I need to measure the execution time of the processing part. However, running lwIP in RAW mode forces me to call tcp_fasttmr() and tcp_slowtmr() every 250/500 ms, which makes accurate measurement pretty hard. Whenever I'm not calling the tcp_tmr() functions for some time, I start repeatedly receiving error messages via UART ("unable to alloc pbuf in recv_handler"). It seems this is called from some ISR related to error handling, but I cannot really find the exact location.
My question is, how do I suspend the network functionality so I don't need to call tcp_tmr() periodically? I tried closing the connection and disabling the interface (netif_set_down()) and disabling the timer interrupt, but it still seems to have no effect on my problem.
I don't know anything about that devboard or the microcontroller on it, but you should have an ethernetif.c (lwIP port) file which should contain the processing of an Ethernet receive interrupt or similar. This should be calling the lwIP function netif->input with a packet to process.
Disabling the interface won't stop this behaviour; it will just stop the higher-level processing of the packet. If you are only measuring the execution time for debugging, you could try disabling the Ethernet receive interrupt and not calling tcp_tmr until you have processed the packets.
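As a rough illustration (assuming the standard Xilinx standalone BSP, so the GIC driver from xscugic.h and the global timer from xtime_l.h; the instance name Gic, the interrupt ID macro and process_batch() are placeholders for whatever your project actually uses):

    #include "xtime_l.h"      // XTime_GetTime(), COUNTS_PER_SECOND (Zynq global timer)
    #include "xscugic.h"      // XScuGic_Disable() / XScuGic_Enable()
    #include "xparameters.h"
    #include "xil_printf.h"

    extern XScuGic Gic;                  // the GIC instance initialised by your Ethernet setup code
    extern void process_batch(void);     // placeholder for the processing step being timed
    #define ETH_INTR_ID XPS_GEM0_INT_ID  // GEM0 interrupt ID - check xparameters_ps.h for your board

    void process_batch_timed(void)
    {
        XTime start, end;

        // Keep the MAC from raising receive interrupts (and don't call
        // tcp_fasttmr()/tcp_slowtmr()) while the measurement runs, so the
        // lwIP stack doesn't perturb the processing loop.
        XScuGic_Disable(&Gic, ETH_INTR_ID);

        XTime_GetTime(&start);
        process_batch();
        XTime_GetTime(&end);

        XScuGic_Enable(&Gic, ETH_INTR_ID);

        u64 us = (end - start) * 1000000ULL / COUNTS_PER_SECOND;
        xil_printf("processing took %u us\r\n", (unsigned int)us);
    }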
From the user's point of view, the property that it "may not transmit all of the data" is a troublesome one: it may mean the handler ends up being called more than once.
The free function async_write ensures the handler is called only once, but it requires the caller to issue its calls in sequence (not to start a new write before the previous one completes), or the data written will be interleaved. For a network application, this is worse than the handler being called more than once.
If the user wants the handler called only once and the data written correctly, the user needs to do something extra.
What I want to ask is: why doesn't Asio just make socket::async_write_some transmit all the data?
What I want to ask is: why doesn't Asio just make socket::async_write_some transmit all the data?
As opposed to async_write, socket::async_write_some is a lower-level method.
The OS network stack is designed around send buffers and receive buffers, and these buffers have to be limited to some amount of memory. When you send a lot of data over a socket, the receiving side can be slower than the sending side, and/or there can be network speed issues.
This is the reason why socket send buffers are limited, and as a result syscalls like write or writev need to be able to notify the user program that the system cannot accept a chunk of data right now. With a socket in async mode this is even more critical. So socket syscalls cannot work in an async manner without signaling the program to hold on.
So async_write_some, as a mid-level wrapper over writev, is required to support partial writes. On the other hand, async_write is a composed operation and can call async_write_some many times in order to send the buffers until the operation completes or fails. It calls the completion handler only once, not for each chunk of data passed to the network stack.
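To make the composed-operation point concrete, a stripped-down version of what async_write does internally might look like this sketch (the real implementation also handles arbitrary buffer sequences, completion conditions and executors; here the socket is simply assumed to outlive the operation):

    #include <boost/asio.hpp>
    #include <cstddef>
    #include <functional>
    #include <memory>
    #include <string>

    using boost::asio::ip::tcp;
    using Handler = std::function<void(const boost::system::error_code&, std::size_t)>;

    // Keep calling async_write_some until every byte of the buffer has been
    // accepted by the network stack, then invoke the user's handler exactly
    // once. This is the essence of what boost::asio::async_write does.
    void write_all(tcp::socket& sock, std::shared_ptr<const std::string> data,
                   std::size_t offset, Handler handler)
    {
        sock.async_write_some(
            boost::asio::buffer(data->data() + offset, data->size() - offset),
            [&sock, data, offset, handler](const boost::system::error_code& ec,
                                           std::size_t n) {
                if (ec || offset + n == data->size()) {
                    handler(ec, offset + n);                    // done or failed: one call
                } else {
                    write_all(sock, data, offset + n, handler); // partial: send the rest
                }
            });
    }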
If the user wants the handler called only once and the data written correctly, the user needs to do something extra.
Nothing special: just use async_write, not socket::async_write_some.
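For completeness, typical usage looks like the following minimal example (assuming Boost 1.66+ for the io_context API); the handler fires exactly once, after the whole message has been handed to the network stack or an error has occurred:

    #include <boost/asio.hpp>
    #include <cstddef>
    #include <iostream>
    #include <memory>
    #include <string>

    int main()
    {
        boost::asio::io_context io;
        boost::asio::ip::tcp::resolver resolver(io);
        boost::asio::ip::tcp::socket sock(io);
        boost::asio::connect(sock, resolver.resolve("example.com", "80"));

        // The shared_ptr keeps the buffer alive until the composed write completes.
        auto msg = std::make_shared<std::string>("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
        boost::asio::async_write(sock, boost::asio::buffer(*msg),
            [msg](const boost::system::error_code& ec, std::size_t n) {
                if (ec)
                    std::cerr << "write failed: " << ec.message() << "\n";
                else
                    std::cout << "wrote " << n << " bytes, handler called once\n";
            });

        io.run();
    }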
I have an app which continuously reads status updates from a server connection.
All is working well with a stream delegate to handle all the reading and writing asynchronously.
There's no part of the app that is "waiting" for a specific response from the server, it is just continuously handling status updates as they sporadically arrive from the socket. There are no requests on the client side that are waiting for responses.
I'm wondering what the best practice would be for the network activity indicator in this case.
I could turn it on in the stream event handler, and off before we leave the handler, but that would be a very short time (just enough for a non-blocking read or write to occur). Trying this, I only see the faintest flicker of the indicator; it needs to be on longer than just the time spent in the event handler.
What about turning it on in the stream delegate, and setting a timer to turn it off a short time later? (This would ensure it's on long enough to be seen, rather than the short time spent in the stream delegate.)
Note: I've tried this last idea: turning on the network activity indicator whenever there's stream activity and noting the NSDate; then in a timer (that fires every 1 second), if the time passed is > 0.5 second, I turn off the indicator. This seems to give a reasonable indication of network activity.
Any better recommendations?
If the network activity is continuous then it sounds like it might be somewhat annoying to the user, especially if it's turning on and off all the time.
Perhaps better would be to test for lack-of-response up to a certain timeout value and then display an alert view to the user if you aren't getting any response from the server. Even that could be optional if you can provide feedback (like "Last update: 5 mins ago") to the user instead.