Run a loop in LabVIEW for a set amount of time, periodically

I'm using LabVIEW to operate and record data from a wastewater reactor. I currently have a program set up to monitor pH continuously, and then use pH data to turn on either an acid or base pump.
My problem is that I want to monitor and record pH data 24/7, but I only want my acid/base pumps to be activated for one hour, every three hours. Ideally, I'd like to tie these operating times to the computer's clock.
For example, from 10:05 am to 11:05 am, I want my acid and base pumps to use data from the pH sensor to either turn on or remain off depending on the pH measured. My goal pH is 7.0 +/- 0.3. So, if the measured pH were 6.5, the base pump would turn on until a pH of 6.7 was reached; if the measured pH were 7.5, the acid pump would turn on until a pH of 7.3 was reached; if the pH were 7.0, both pumps would remain off. So far, my code does this, but the pumps turn on and off constantly.
At 11:05, both pumps would be "deactivated" and turn off, though pH measurement should continue. Then, 3 hours after the initial pump start time (3 hours after 10:05 am = 1:05 pm, or 2 hrs after the 11:05 am stop time) this cycle would start again, running again for one hour. I want this cycle to continue over and over (i.e. pumps on in response to pH measurements for 1 hr, every 3 hours).
Is it possible to do this in LabVIEW? (I'm extremely new to LabVIEW, too.) Thanks!

Yes, it's certainly possible to do this.
The simplest way to achieve what you describe would be to add extra logic to the pump control code inside your loop. Each loop iteration, get the current time (e.g. with Get Date/Time in Seconds) and calculate whether the pumps should be enabled or not (you might find Quotient and Remainder useful). Then you could use an And function to enable each pump if both the pH calculation and the enabled-time calculation produce a True output.
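Since LabVIEW is graphical, here is that logic as a minimal Python sketch; the 1-hour-in-3 window, the 7.0 +/- 0.3 band and the clock-based start time come from the question, and everything else (names, structure) is illustrative:

    import time

    CYCLE_S = 3 * 60 * 60     # a new cycle starts every 3 hours
    ACTIVE_S = 1 * 60 * 60    # pumps are enabled for the first hour of each cycle

    def pumps_enabled(start_time, now=None):
        """True during the first hour of every three-hour cycle
        (the Quotient & Remainder idea, applied to elapsed seconds)."""
        now = time.time() if now is None else now
        return (now - start_time) % CYCLE_S < ACTIVE_S

    def pump_outputs(ph, enabled, target=7.0, band=0.3):
        """AND each pH comparison with the time-enabled flag."""
        acid_on = enabled and ph > target + band   # e.g. pH 7.5 -> acid on until 7.3
        base_on = enabled and ph < target - band   # e.g. pH 6.5 -> base on until 6.7
        return acid_on, base_on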
I'd suggest using functions from the Programming palette for your greater-than, less-than, And, etc. operations, as they take up less diagram space and are, in my opinion, easier to understand than the Express functions.
A more sophisticated and scalable approach might be to separate the pH measurement and the pump control into two different loops and use some mechanism to transfer the latest pH value into the pump control loop (a notifier, local variable, functional global or channel wire would all be options here). A state machine would then be a good pattern for the pump control logic.
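As a rough analogy for the two-loop pattern (Python threads standing in for LabVIEW loops, and a shared variable standing in for a notifier or functional global; read_ph is a placeholder for your actual DAQ read):

    import random
    import threading
    import time

    latest_ph = {"value": 7.0}     # stand-in for a notifier/functional global

    def read_ph():
        """Placeholder for the real pH read from your hardware."""
        return 7.0 + random.uniform(-0.5, 0.5)

    def measurement_loop(stop):
        """Runs 24/7: reads and records pH, publishing the latest value."""
        while not stop.is_set():
            latest_ph["value"] = read_ph()
            time.sleep(1.0)

    def control_loop(stop):
        """Consumes the latest pH; the pump state machine would live here."""
        while not stop.is_set():
            ph = latest_ph["value"]
            time.sleep(1.0)

    stop = threading.Event()
    threading.Thread(target=measurement_loop, args=(stop,), daemon=True).start()
    threading.Thread(target=control_loop, args=(stop,), daemon=True).start()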

Related

Detect breakdown voltage in an AC waveform

I need to monitor an AC voltage waveform and record the RMS value when breakdown happens. I roughly know how to acquire data from videos I have watched; however, it is difficult for me to produce a solution that reads the breakdown voltage value. Ideally, I would also take a screenshot along with the breakdown voltage value.
In case you are not familiar with this topic: when breakdown happens, the voltage drops immediately to zero. So what I need is to measure the voltage just before it falls to zero and, if possible, take a screenshot. (The original post shows an image of a normal waveform in black and a breakdown waveform in red.)
Naive solution*:
Take the data and get the Y values (this will depend on the datatype you have, which in turn depends on how you acquire the data).
Find the breakdown point by iterating over the values while maintaining a couple of flags (I would track "got higher than X" and, once that's true, track "got lower than Y").
From there, take the last N points (Get Array Subset) and get the array max, or just track the maximum value as you run - see the sketch after this answer.
Assuming you have the graph in a control, you can just right click and select Create>>Invoke Node>>Export Image.
I would suggest playing with this in a VI with static data, which you can run repeatedly to check how your code behaves.
*I don't know the problem domain and am not overly familiar with the various analysis VIs that ship with LV, so there are quite possibly more efficient ways of doing this.
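Here's a minimal Python sketch of that flag-based scan, assuming the Y values are already in an array; the thresholds and window length are placeholders you'd set from your own waveform:

    import numpy as np

    def breakdown_voltage(y, high=100.0, low=5.0, n_last=50):
        """Scan samples for a rise above `high` followed by a fall below `low`,
        then return the peak of the n_last samples before the fall."""
        armed = False
        for i, v in enumerate(y):
            if not armed and abs(v) > high:
                armed = True                       # "got higher than X"
            elif armed and abs(v) < low:           # "got lower than Y": breakdown
                window = y[max(0, i - n_last):i]   # the Get Array Subset step
                return float(np.max(np.abs(window)))
        return None  # no breakdown found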

Define inflow/s with different depart speeds

I want the vehicles in the inflow to drive at a constant speed, but I want this constant speed to change between episodes (let's make this speed uniformly sampled between 10 m/s and 25 m/s).
E.g. in episode #5 all vehicles drive at 12.3 m/s and in episode #6 all vehicles drive at 19.7 m/s.
How can I do that?
Can I do it with only one inflow, or do I need an infinite number of inflows, one per speed, switching between them dynamically? (Which I'm not sure how to do anyway.)
Yes! It's a little tricky but you can make it work. If you look at the reset method on line 988 of the following file (https://github.com/flow-project/flow/blob/master/flow/envs/bottleneck.py) you can see that what we do is create a new set of inflows and then restart the simulation so that the new set of inflows is active. You should be able to add a similar bit of code to your environment to make it work.
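A rough sketch of the inflow-rebuilding part in Python, to be called from a reset override like the one in bottleneck.py; the edge/vehicle-type names and the rate are placeholders for your own configuration, and the depart-speed keyword has varied between flow versions, so check yours:

    import numpy as np
    from flow.core.params import InFlows  # flow's standard params module

    def sample_episode_inflows(edge="inflow_edge", veh_type="human",
                               vehs_per_hour=1000):
        """Build a fresh InFlows object whose vehicles all depart at one
        constant speed, sampled uniformly per episode."""
        speed = float(np.random.uniform(10, 25))  # e.g. 12.3 in one episode
        inflow = InFlows()
        inflow.add(veh_type=veh_type,
                   edge=edge,
                   vehs_per_hour=vehs_per_hour,
                   depart_speed=speed)  # older flow versions spell this departSpeed
        return inflow

You'd call something like this at the top of your reset, rebuild the network/net params with the returned inflows, and restart the simulation, mirroring what bottleneck.py does.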

Waiting time of SUMO

I am using SUMO for traffic signal control and want to optimize the signal phases to reduce certain objectives. During the process, I use the traci module to read out the state of the traffic junction. The confusing part is traci.lane.getWaitingTime.
I don't know how the waiting time is calculated, and after comparing it with the output of two detectors, I think it is too large.
Can someone explain how the waiting time is calculated in SUMO?
The waiting time essentially counts the number of seconds a vehicle has a speed of less than 0.1 m/s. In the case of traci.lane, this means it is the number of (nearly) standing vehicles multiplied by the step length (since traci.lane returns the values for the last step).
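For example, with a 1 s step length and five (nearly) standing vehicles on the lane, getWaitingTime returns 5 s. Here's a sketch that reconstructs the value by hand with traci (run it while a traci connection is active):

    import traci  # SUMO's TraCI Python client

    def lane_waiting_time_by_hand(lane_id):
        """Reconstruct what traci.lane.getWaitingTime reports for the last
        step: the number of (nearly) standing vehicles times the step length."""
        step_len = traci.simulation.getDeltaT()  # step length (seconds in current SUMO)
        standing = sum(
            1 for veh in traci.lane.getLastStepVehicleIDs(lane_id)
            if traci.vehicle.getSpeed(veh) < 0.1   # the 0.1 m/s "waiting" cutoff
        )
        return standing * step_len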

Compensating for laggy positive feedback

I'm trying to make a program run as accurately as possible while staying at a fixed frame rate. How do you do this?
Formally, I have some parameter b in [0,1] that I can set to determine how accurate my computations are (where 0 is least accurate, 0.5 is fairly accurate, and 1 is very accurate). The higher this is, the lower frame rate I will get.
However, there is a "lag", where after changing this parameter, the frame rate won't change until d milliseconds afterwards, where d can vary and is unknown.
Is there a way to change this parameter that prevents "wiggling"? The problem is oscillation: when the frame rate is off target, I adjust the parameter and measure again, but because of the lag the frame rate has barely changed, so I adjust the parameter further; then the delayed effect kicks in, the frame rate overshoots, and I have to adjust back the other way. Is there a way to prevent this? I need to be as reactive as possible, because changing the parameter too slowly leaves the frame rate wrong for too long.
Looks like you need an adaptive feedback dampener. Trying an electrical circuit analogy :)
I'd first try to get more information about what the circuit's input signal and responsiveness look like. So I'd first make the algorithm update b not with the desired values, but with the previous value plus or minus (as needed, towards the desired value) a small fixed increment, say 0.01 (ignore the sloppy response time for now). While doing so, I'd collect and plot/analyze the "desired" b values, looking for:
the general shape of the changes: smooth, or rather "steppy" or "spiky"? (Spiky would require stronger dampening to prevent oscillations; steppy would require weaker dampening to prevent lagging.)
the maximum/typical/minimum changes in values from sample to sample
the distribution of the changes in values from sample to sample (I'd plan for the algorithm to react best to changes in a typical range, say the 20-80% range, and consider lagging acceptable for changes above that, or oscillations for changes below it)
The end goal is to be able to obtain parameters for operating alternately in 2 modes:
a high-speed tracking mode (also the system's initial mode)
a normal tracking mode
In high-speed tracking mode the b value updates can be either:
not dampened - the update value is the full desired value - only if the shape of the changes is not spiky, and only on the first b update after entering high-speed tracking mode. This helps reduce lagging.
dampened - the update delta is just a fraction (the dampening factor) of the desired delta, reflecting the fact that the effect of the previous b update might not yet be fully reflected in the current frame rate due to d. Dampening helps prevent oscillations at the expense of potentially increased lag (always conflicting requirements). The dampening factor would be higher for a smooth shape of changes and smaller for a spiky one.
Switching from high-speed tracking mode to normal tracking mode can be done when the delta between b's previous value and its desired value falls below a certain mode-change threshold (possibly required to hold for a minimum number of consecutive samples). The mode-change threshold would initially be estimated from the information collected above and/or determined empirically.
In normal tracking mode, the delta between b's previous value and its desired value remains below the mode-change threshold and is either ignored (no b update) or an update is made with the desired value or some averaged one - tiny course corrections keeping the frame rate practically constant: no dampening, no lagging, no oscillations.
When, in normal tracking mode, the delta between b's previous value and its desired value rises above the mode-change threshold, the system switches back to high-speed tracking mode.
I would also try to get a general idea of what the d response time looks like. To do that, I'd change the algorithm to update b with the desired values not at every iteration, but every n iterations (maybe even re-trying with several values of n). This should indicate how many sample periods a b value change generally takes to become fully effective, and it should be reflected in the dampening factor: the longer a change takes to take effect, the stronger the dampening should be to prevent oscillations.
Of course, this is just the general idea, a lot of experimental trial/adjustment iterations may be required to reach a satisfactory solution.
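A minimal Python sketch of the two-mode update described above; all the constants (dampening factors, threshold) are illustrative and would be tuned from the measurements just discussed:

    def update_b(b, b_desired, state, spiky=False,
                 damp_smooth=0.5, damp_spiky=0.2, threshold=0.05):
        """One b update per sample, switching between the two modes."""
        delta = b_desired - b
        if state["mode"] == "normal":
            if abs(delta) <= threshold:
                return b                      # tiny delta: ignore, keep frame rate steady
            state["mode"] = "high_speed"      # large delta: switch to fast tracking
            state["first_update"] = True
        if state["first_update"] and not spiky:
            b = b_desired                     # one undampened jump reduces lag
        else:
            damp = damp_spiky if spiky else damp_smooth
            b += damp * delta                 # dampened step prevents oscillation
        state["first_update"] = False
        if abs(b_desired - b) < threshold:
            state["mode"] = "normal"          # close enough: back to normal tracking
        return min(1.0, max(0.0, b))

    state = {"mode": "high_speed", "first_update": True}  # high-speed is the initial mode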

SD driver - Write speed

We've been trying to figure out why we only achieve a write speed of ~53 MB/s on UHS104 cards that claim 90 MB/s.
Due to hardware constraints, the clock frequency supplied to the card is only 148.5 MHz (instead of 208 MHz).
Does that mean that we should achieve a speed of (148.5 * 4)/8 = 74.25 MB/s?
Or is our calculation wrong, since it assumes that if a card guarantees a speed of 90 MB/s at 208 MHz, then it should guarantee a speed of 74.25 MB/s at 148.5 MHz?
The Simplified Physical Layer Specification states that for maximum performance you need to write full AU blocks - usually 2 or 4 MB - otherwise the card will have to copy data around internally when a write crosses a block boundary. Unfortunately, most of the Speed Class specification is missing from chapter 4.13 of the simplified spec.
The first AUs may use a different wear-levelling strategy, as they are normally used for the FATs. This could make them slower to write to.
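As a toy illustration of the full-AU idea (the AU size and the dev handle are hypothetical; in practice you'd read the real AU size from the card's SD Status register):

    AU_SIZE = 4 * 1024 * 1024   # assumed 4 MB allocation unit; query the card for the real value

    def write_full_aus(dev, data, offset=0):
        """Issue writes as whole, AU-aligned chunks so the controller never has
        to merge partial allocation units internally. `dev` is a hypothetical
        file-like block-device handle opened for binary writing."""
        assert offset % AU_SIZE == 0, "start writes on an AU boundary"
        dev.seek(offset)
        for start in range(0, len(data), AU_SIZE):
            dev.write(data[start:start + AU_SIZE])   # one full AU per transfer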