Executing a regular task in Scilab - automation

I'm currently working on the automation of a test bench (the subject of my work placement). It consists of electronic loads (BK8610, BK8500), DC power supplies (BK9202, ETS60X14C) and multimeters (MetraEnergy).
I created a library of functions in Scilab to control all the devices at the same time (I can set parameters and take measurements over Ethernet, serial ports and a VISA interface). Now that every command works fine, I'm trying to write a general script that takes measurements at regular intervals.
I thought of using tic() at the beginning of my script, then exec_time = toc() at the end, and waiting for 5 s - exec_time before executing it again:
tic()
// my code
exec_time = toc()
// wait until a total of 5 s has elapsed (i.e. for 5 - exec_time seconds)
// repeat
Even though this could work, I wonder whether Scilab has a function equivalent to an asynchronous interrupt timer (commonly used on ATmega, STM32 and so on)? That would be much easier.
Hope this is clear. Thank you!

Maybe realtime and realtimeinit are a solution for you: realtimeinit(dt) sets the time unit in seconds and realtime(k) blocks until date k, so for your case you could call realtimeinit(5) and realtime(0) once, then realtime(k) at the start of iteration k; the schedule then does not drift with your execution time. Modified example from the Scilab help:
clc;
clear;
realtimeinit(1/2);  // sets the time unit to half a second
realtime(0);        // sets the current date to 0
for k = 1:10
    realtime(k);    // blocks until date k, i.e. k half-seconds after realtime(0)
    mprintf('\r\ncurrent time is %.1f sec', k/2);
end


How to add a delay in microseconds in a Tcl file?

I am using the when command in a Tcl file, and after the condition is met I want to wait for some microseconds. I have found after, but the delay you specify for after is in milliseconds, and it does not accept decimal values.
So is there another way to add a short delay in a Tcl file?
There's no native operation for that. If it is critical, you could busy-loop looking at clock microseconds…
proc microsleep {micros} {
    set expiry [expr {$micros + [clock microseconds]}]
    while {[clock microseconds] < $expiry} {}
}
I don't really recommend doing this as it is not energy efficient; such high precision waiting is rarely required in my experience (unless you're working on an embedded system with realtime requirements, an area where Tcl isn't a perfect fit).
Of course, you can also make a C wrapper round a system call like nanosleep(), and that might or might not be a better choice (and might or might not be more efficient)…

Confusion with writing a game loop

I'm working on a 2D video game framework, and I've never written a game loop before. Most frameworks I've looked into seem to implement both draw and update methods.
For my project I implemented a loop that calls these two methods. I noticed that in other frameworks these methods don't always alternate; some frameworks run update far more often than draw. Also, most frameworks of this type run at 60 FPS, so I figure I'll need some sort of sleep in here.
My question is, what is the best method for implementing this type of loop? Do I call draw then update, or vice versa? In my case I'm writing a wrapper around SDL2, so maybe that library requires things to be set up in a certain way?
Here's some pseudo-code I'm thinking of for the implementation.
loop do
  clear_screen
  draw
  update
  sleep(16.milliseconds)
  break if window_is_closed
end
Though my project is being written in Crystal, I'm looking more for a general concept that could be applied to any language.
It depends on what you want to achieve. Some games prefer the game logic to run more frequently than the frame rate (I believe Source games do this); for some games you may want the game logic to run less frequently (the only example of this I can think of is the servers of some multiplayer games, most famously Overwatch).
It's also important to consider that this is a question of resolution, not speed. A game with a logic rate of 120 and a frame rate of 60 is not necessarily running at 2x speed; any time-critical operations within the game logic should be done relative to the clock*, not the tick rate, or your game will literally go into slow motion if the frames take too long to render.
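To make "relative to the clock" concrete, here is a minimal Python sketch (the names are purely illustrative):

import time

SPEED = 120.0                # units per real second
x = 0.0
last = time.monotonic()

def update():
    # Advance the game state by the real time that has elapsed,
    # not by a fixed amount per tick.
    global x, last
    now = time.monotonic()
    dt = now - last          # seconds since the previous update, whatever the tick rate
    last = now
    x += SPEED * dt          # same distance per real second at 60 or 120 updates/s

With a fixed per-tick increment instead of dt, doubling the update rate would double the apparent speed.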
I would recommend writing a loop like this:
loop do
  time_until_update = (update_interval + time_of_last_update) - current_time
  time_until_draw = (draw_interval + time_of_last_draw) - current_time
  work_done = false

  # Update the game if it's been enough time
  if time_until_update <= 0
    update
    time_of_last_update = current_time
    work_done = true
  end

  # Draw the screen if it's been enough time
  if time_until_draw <= 0
    clear_screen
    draw
    time_of_last_draw = current_time
    work_done = true
  end

  # Nothing to do, sleep for the smallest period
  if work_done == false
    smaller = time_until_update
    if time_until_draw < smaller
      smaller = time_until_draw
    end
    sleep_for(smaller)
  end

  # Leave, maybe
  break if window_is_closed
end
You don't want to wait for 16 ms every frame, otherwise you might end up over-waiting if the frame takes a non-trivial amount of time to complete. The work_done variable is there so we know whether the intervals we calculated at the start of the loop are still valid; we may have done 5 ms of work, which would throw our sleeping completely off, so in that scenario we go back around and calculate fresh values.
* You may want to abstract the clock; using the clock directly can have some weird effects. For example, if you save the game and you store the last time you used a magical power as a clock time, it will instantly come off cooldown when you load the save, as that time is now minutes, hours or even days in the past. Similar issues exist with the process being suspended by the operating system.
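As a rough illustration of abstracting the clock, here is a small Python sketch (the class and field names are made up): keep your own notion of elapsed game time and store cooldowns against that rather than the wall clock.

import time

class GameClock:
    # Tracks time the game has actually spent running, so timestamps written
    # into a save file stay meaningful after loading.
    def __init__(self, saved_elapsed=0.0):
        self.elapsed = saved_elapsed      # restored from the save file, if any
        self._last = time.monotonic()

    def tick(self):
        # Call once per loop iteration; returns this frame's delta time,
        # clamped so a long stall (debugger, OS suspend) doesn't jump the clock.
        now = time.monotonic()
        dt = min(now - self._last, 0.25)
        self._last = now
        self.elapsed += dt
        return dt

Cooldowns are then compared against clock.elapsed instead of the wall clock, and on save you only persist elapsed.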

Measuring time between two rising edges on a BeagleBone

I am reading a sensor output as a square wave (0-5 V) on an oscilloscope. Now I want to measure the frequency of one period with a BeagleBone, so I need to measure the time between two rising edges. However, I don't have any experience working with the BeagleBone. Can you give some advice or sample code for measuring the time between rising edges?
How deterministic do you need this to be? If you can tolerate some inaccuracy, you can probably do it on the main Linux OS; if you want to be fancy pants, this seems like a potential use case for the BBB's PRUs (which I unfortunately haven't used, so take this with substantial amounts of salt). I would expect you could write PRU code that sits in an infinite outer loop; inside that loop, it spins until the pin reads 0, then spins until the pin reads 1 (this is the first rising edge), then counts until either the pin reads 0 again (the falling edge) or, with another loop, until the next rising edge. Either way, you can take the counter value and convert it directly into time: the PRU is stated to have a fixed time per instruction and runs at 200 MHz (50 ns/instruction). Assuming your loop is something like
# starting with pin low
inner loop 1:
    registerX = loadPin
    increment counter
    jump if zero registerX to inner loop 1
# pin is now high
inner loop 2:
    registerX = loadPin
    increment counter
    jump if one registerX to inner loop 2
# pin is now low again
That should take three instructions per counter increment, so you can get the elapsed time as 3 * counter * 50 ns. For example, a final count of 66,667 would correspond to 3 * 66,667 * 50 ns ≈ 10 ms, i.e. a 100 Hz signal.
As suggested by Foon in his answer, the PRUs are a good fit for this task (although depending on your requirements it may be fine to use the ARM processor and standard GPIO). Please note that (as far as I know) both the regular GPIOs and the PRU inputs are based on 3.3V logic, and connecting a 5V signal might fry your board! You will need an additional component or circuit to convert from 5V to 3.3V.
I've written a basic example that measures the timing between rising edges on header pin P8.15, for my own purpose of measuring an engine's RPM. If you decide to use it, you should check the timing results against a known reference; it's about right, but I haven't checked it carefully at all. It is implemented in PRU assembly and uses the pypruss Python module to simplify interfacing.
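If the "ARM processor and standard GPIO" route is accurate enough for your needs, a rough userspace sketch in Python using the (older) sysfs GPIO interface might look like the following. It assumes the pin is already exported as gpio47 (P8.15 should be GPIO1_15, but check your pin mux) and that the 5 V signal has been level-shifted to 3.3 V; expect some jitter since the kernel is not realtime.

import select, time

GPIO = "/sys/class/gpio/gpio47"       # assumption: P8.15 == GPIO1_15 == gpio47, already exported

with open(GPIO + "/direction", "w") as f:
    f.write("in")
with open(GPIO + "/edge", "w") as f:  # ask the kernel to signal rising edges
    f.write("rising")

value = open(GPIO + "/value", "r")
poller = select.poll()
poller.register(value, select.POLLPRI | select.POLLERR)

def wait_for_rising_edge():
    value.seek(0)
    value.read()                      # clear any pending event
    poller.poll()                     # block until the next rising edge
    return time.monotonic()

t0 = wait_for_rising_edge()
t1 = wait_for_rising_edge()
period = t1 - t0
print("period %.6f s -> frequency %.1f Hz" % (period, 1.0 / period))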

Hyperopt: set timeouts and modify the space during execution

Could someone help with the following:
How do I set a timeout for each individual test? And a timeout for the total experiment?
How do I set up a progressive strategy that eliminates/prunes a percentage of the worst-scoring branches of the search space at different stages of the experiment (while still using the current optimization algorithms)? E.g. at 30% of the total experiment it could remove the 50% worst-scoring classifiers together with their whole branches of hyperparameters, so they are excluded from upcoming tests; then the same process again at 60%, and so on.
Thanks a lot!
Following my exchange on hyperopt's GitHub:
There is no per-trial timeout, but hyperopt-sklearn implements its own solution by just wrapping the objective function. Look for "fn_with_timeout" at https://github.com/hyperopt/hyperopt-sklearn/.
From issue 210: "the optimizers are stateless, and fmin stores all state of the experiment in the trials object. So if you remove some experiments from the trials object, it's as if they never happened. use fmin's "max_evals" parameter to interrupt search as often as you need to make these sorts of modifications. It should be fine to use repeated calls with e.g. max_evals increasing by 1 every time if you want really fine grained control."
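Putting those two points together, a rough sketch for plain hyperopt could look like this (objective and space are placeholders you would supply; the timeout wrapper only mimics the idea of fn_with_timeout rather than reusing its code):

import multiprocessing as mp
import time
from hyperopt import fmin, tpe, Trials, STATUS_FAIL

def with_timeout(fn, timeout_s):
    # Run each trial in a worker process and mark it as failed if it
    # exceeds timeout_s seconds (fn must be a picklable, top-level function).
    def wrapped(params):
        with mp.Pool(1) as pool:
            result = pool.apply_async(fn, (params,))
            try:
                return result.get(timeout=timeout_s)
            except mp.TimeoutError:
                return {"status": STATUS_FAIL}
    return wrapped

def run(objective, space, total_budget_s=3600, per_trial_s=60, max_evals=200):
    trials = Trials()
    start = time.time()
    fn = with_timeout(objective, per_trial_s)
    for n in range(1, max_evals + 1):
        # Repeated fmin calls with an increasing max_evals resume from the same
        # Trials object, so between calls you can stop on a total-time budget,
        # inspect trials.losses(), or rebuild a pruned Trials/space (question 2).
        fmin(fn=fn, space=space, algo=tpe.suggest, trials=trials, max_evals=n)
        if time.time() - start > total_budget_s:
            break
    return trials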
Thanks for looking into this, @doxav. I've written some code that addresses question 1, taking part of fn_with_timeout from hyperopt-sklearn and adapting it for standard Hyperopt cost functions.
You can find it here:
https://gist.github.com/hunse/247d91d14aaa8f32b24533767353e35d

What happens if you pass a negative number to the taskDelay function in VxWorks?

I noticed that the parameter of taskDelay is of type int, which means the number could be negative. I'm just wondering how the function reacts when passed a negative number.
Most functions would validate the input, and just return early/return 0/set the parameter in question to a default value.
I presume there's no critical need to do this in production, and you probably have some code lying around that you could test with... why not give it a go?
The documentation doesn't address it, and the only error codes it does define don't cover this case. The most correct answer, therefore, is that the results are undefined.
See the VxWorks / Tornado II FAQ for this gem, however:
taskDelay(-1) shows another bug in the vxWorks timer/tick code. It has the (side) effect of setting vxTicks to zero. This corrupts the localtime (and probably other things). In fact taskDelay(x) will have the same effect if vxTicks + x >= 0x100000000. If the system clock rate is 100Hz this happens after about 500 days (because vxTicks wraps). At faster clock rates it will happen sooner. Anyone trying for several years uptime?
Oh, there is an undocumented upper limit on the clock rate. At rates above 4294 select() will fail to convert its 'usec' time into the correct number of ticks. (From: David Laight, dsl#tadpole.co.uk)
Assuming this bug is old, I would hope that it would either return an error or do the same thing as taskDelay(0), which puts your task at the end of the ready queue.
For taskDelay(10), the remaining delay tick count effectively counts down 10, 9, ..., 1, 0.
For taskDelay(-10), it effectively counts down -10, -11, ..., -2147483648, then wraps to 2147483647, ..., 1, 0.