How to suspend a task for 20 ticks during a 400-tick delay in VxWorks

I'm trying to write a program in VxWorks. A task delays for a total of 400 ticks; at the 100th tick it should be suspended for 20 ticks, then resume its delay.
My main code is like the following:
void DelaySuspend (int level)
{
    int tid, suspend_start, suspend_end, i;

    suspend_start = vxTicks + 100;
    suspend_end   = vxTicks + 120;
    i = vxTicks;

    /* myfunction calls taskDelay(400) */
    tid = taskSpawn("tMytask", 200, 0, 2000, (FUNCPTR)myfunction,
                    0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

    /* between vxTicks+100 and vxTicks+120, suspend tMytask */
    while (i < suspend_start)
    {
        i = tickGet();
    }
    while (i <= suspend_end && i >= suspend_start)
    {
        i = tickGet();
        taskSuspend(tid);
    }
}
What I want is to verify that the total delay time (in ticks) doesn't change even if I suspend the task for part of it. I know the answer, but I'm trying to program it to show how VxWorks handles this.
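For reference, here is a minimal sketch of what myfunction could look like so that it also measures the delay (the post only says it calls taskDelay(400); the measuring code is an illustration, not from the original):

/* needs vxWorks.h, taskLib.h, tickLib.h, stdio.h */
void myfunction (void)
{
    ULONG before = tickGet();
    taskDelay(400);          /* suspension elsewhere shouldn't stretch this */
    printf("taskDelay(400) actually took %lu ticks\n",
           (unsigned long)(tickGet() - before));
}

If VxWorks keeps counting delay ticks while the task is suspended, this should print about 400 even with the 20-tick suspension in the middle.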

I am still not 100% clear on what you are trying to do, but calling taskSuspend in a loop like that isn't going to suspend the task any further; once it is suspended, the extra calls do nothing. I am guessing you want something like this:
void DelaySuspend (int level)
{
    int tid, suspend_start, suspend_end, i;

    suspend_start = vxTicks + 100;
    suspend_end   = vxTicks + 120;
    i = vxTicks;

    /* myfunction calls taskDelay(400) */
    tid = taskSpawn("tMytask", 200, 0, 2000, (FUNCPTR)myfunction,
                    0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

    /* between vxTicks+100 and vxTicks+120, suspend tMytask */
    while (i < suspend_start)
    {
        i = tickGet();
    }
    taskSuspend(tid);
    while (i <= suspend_end && i >= suspend_start)
    {
        i = tickGet();
    }
}
I just pulled the taskSuspend call out of the loop; maybe you also want a taskResume after the loop? I am not sure what you are attempting to accomplish.
Whatever the case, there are probably better ways to do what you want. In general, using taskSuspend is a bad idea because you have no idea what the task is doing when you suspend it. For example, if the suspended task is doing file I/O when you suspend it and it holds the file system mutex, then no other task can do file I/O until you resume that task.
In general it is much better to block on a taskDelay/semaphore/mutex/message queue than to use taskSuspend. I understand that this is just a test, and as such it may be OK, but if this test becomes production code, then you are asking for problems.
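To illustrate that advice, here is a minimal sketch (mine, not from the answer; pauseSem, workerTask, and controller are made-up names) of gating a task with a binary semaphore so that it only ever pauses at a checkpoint it chooses itself:

/* needs vxWorks.h, semLib.h, taskLib.h */
SEM_ID pauseSem; /* SEM_FULL = worker may run, taken = worker pauses */

void workerTask (void)
{
    for (;;)
    {
        /* Checkpoint: blocks only while the controller holds the semaphore,
         * so the task is never frozen mid-I/O holding someone else's mutex. */
        semTake(pauseSem, WAIT_FOREVER);
        semGive(pauseSem);

        taskDelay(1); /* stand-in for one unit of real work */
    }
}

void controller (void)
{
    pauseSem = semBCreate(SEM_Q_FIFO, SEM_FULL);
    taskSpawn("tWorker", 200, 0, 2000, (FUNCPTR)workerTask,
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

    taskDelay(100);                  /* let the worker run for 100 ticks */
    semTake(pauseSem, WAIT_FOREVER); /* pause it at its next checkpoint  */
    taskDelay(20);                   /* hold it paused for 20 ticks      */
    semGive(pauseSem);               /* resume                           */
}

The difference from taskSuspend is that the worker decides where it is safe to stop.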

Related

Unit testing infinite Flux speed of generation

I have a Flux which generates events at some pace, indefinitely. I would like to use StepVerifier to check that after 5 seconds at least 2 events have been generated. How can I verify this behavior using StepVerifier?
A sample flux for testing can look like this:
public void fluxTest() {
    final AtomicLong counter = new AtomicLong(0);
    final Random rnd = new Random();
    final Flux<String> randomIntervalEmitter = Flux.generate(generator -> {
        try {
            // increment the counter so the slow/fast branches actually alternate
            final long counterDivided = counter.getAndIncrement() % 12;
            if (counterDivided > 0) {
                TimeUnit.SECONDS.sleep(rnd.nextInt(1, 10));
            } else {
                TimeUnit.MILLISECONDS.sleep(rnd.nextInt(1, 50));
            }
            generator.next("asdf " + counterDivided);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    });
    final Flux<String> regularDummyUpdate = Flux.interval(Duration.ofSeconds(5))
            .map(e -> "" + (88 + (System.currentTimeMillis() % 104)));
    final Flux<String> stringFluxWithSomePace = randomIntervalEmitter.mergeWith(regularDummyUpdate);
    stringFluxWithSomePace.subscribe(System.out::println);
}
As stated in the javadoc of StepVerifier:
"The verification must be triggered after the terminal expectations (completion, error, cancellation) have been declared, by calling one of the verify() methods."
So when I have an infinite source, I need to trigger some terminal operation myself. Complete and Error are emitted by the source (which I am not changing in any way for the tests), and Cancel can be triggered by the subscriber (the StepVerifier). So, using thenCancel() and verify() on the StepVerifier, I can assert on just the beginning of the infinite stream.
final var sv = StepVerifier.create(stringFluxWithSomePace.log())
        // within 23 seconds, the "heartbeat" should arrive at least three times
        .expectSubscription()
        .expectNextCount(3)
        .thenCancel()
        .verify(Duration.ofSeconds(23));
The nice thing is that after receiving any 3 items, the stream is cancelled and the test finishes (in my case usually in under a second).

How to simulate output delay using next_trigger() in SystemC?

I have been reading this upvoted answer on Stack Overflow: https://stackoverflow.com/a/26129960/12311164
It says that replacing wait(delay, units) in an SC_THREAD with next_trigger(delay, units) in an SC_METHOD works.
But when I tried it, it does not work. I am trying to build an adder module with a 2 ns output delay. Instead of having a 2 ns output delay, the adder output gets updated every 2 ns.
Design:
#include "systemc.h"
#define WIDTH 4
SC_MODULE(adder) {
sc_in<sc_uint<WIDTH> > A, B;
sc_out<sc_uint<WIDTH> > OUT;
void add(){
sc_time t1 = sc_time_stamp();
int current_time = t1.value();
int intermediate = A.read() + B.read();
next_trigger(2, SC_NS);
OUT.write(intermediate);
cout << " SC_METHOD add triggered at "<<sc_time_stamp() <<endl;
}
SC_CTOR(adder){
SC_METHOD(add);
sensitive << A << B;
}
};
I know how to simulate the delay using two techniques: an sc_event with an SC_METHOD, and the wait statement in an SC_THREAD. But I would like to simulate the delay using next_trigger(). I have read the Language Reference Manual but could not figure out how to do it.
Simulated on EDA Playground here: https://edaplayground.com/x/dFzc
I think I need to trigger 2 ns after the inputs change; how do I do that?
You will have to track state manually:
sc_uint<WIDTH> intermediate;

void add() {
    if (A->event() || B->event() || sc_delta_count() == 0) {
        // inputs changed (or first evaluation): latch them and come back in 2 ns
        intermediate = A.read() + B.read();
        next_trigger(2, SC_NS);
    } else {
        // woke up from the timed next_trigger: drive the delayed output
        OUT->write(intermediate);
    }
}
The problem is that using next_trigger doesn't magically transform your SC_METHOD into an SC_THREAD. In general, I find any usage of next_trigger inconvenient; there are better ways of doing this using sc_event.
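For what it's worth, here is a minimal sketch of that sc_event approach (my own illustration, not code from the answer; the module name adder_ev and the two-method split are assumptions, and WIDTH is reused from the question):

SC_MODULE(adder_ev) {
    sc_in<sc_uint<WIDTH> >  A, B;
    sc_out<sc_uint<WIDTH> > OUT;

    sc_event       delay_ev;
    sc_uint<WIDTH> pending;

    void capture() {               // runs whenever A or B changes
        pending = A.read() + B.read();
        delay_ev.notify(2, SC_NS); // schedule the delayed output update
        // note: only the earliest pending notification survives, so input
        // changes arriving within the 2 ns window collapse into one update
    }

    void drive() {                 // runs 2 ns after the notify
        OUT.write(pending);
    }

    SC_CTOR(adder_ev) {
        SC_METHOD(capture);
        sensitive << A << B;
        SC_METHOD(drive);
        sensitive << delay_ev;
        dont_initialize();         // applies to drive: no output at time 0
    }
};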

How to intentionally consume a portion of the CPU [duplicate]

How can I generate steady CPU load in C#, lower than 100%, for a certain time? I would also like to be able to change the load amount after a certain period of time. How do you recommend generating usage spikes for a very short time?
First off, you have to understand that CPU usage is always an average over a certain period. At any given instant, the CPU is either working or it is not; it is never 40% working.
We can, however, simulate a 40% load over, say, a second by having the CPU work for 0.4 seconds and sleep for 0.6 seconds. That gives an average utilization of 40% over that second.
Cutting it down to chunks smaller than one second, say 100 milliseconds, gives even more stable utilization.
The following method takes the desired utilization as an argument and then loads a single CPU/core to that degree:
public static void ConsumeCPU(int percentage)
{
    if (percentage < 0 || percentage > 100)
        throw new ArgumentException("percentage");

    Stopwatch watch = new Stopwatch();
    watch.Start();
    while (true)
    {
        // Spin for "percentage" milliseconds, then sleep for the rest of the
        // 100 ms window. So 40% utilization means work 40 ms and sleep 60 ms.
        if (watch.ElapsedMilliseconds > percentage)
        {
            Thread.Sleep(100 - percentage);
            watch.Reset();
            watch.Start();
        }
    }
}
I'm using a stopwatch here because it is more accurate than the TickCount property, but you could likewise use that and check with subtraction whether you've run long enough.
Two things to keep in mind:
On multi-core systems you will have to spawn one thread per core; otherwise only one CPU/core is exercised, giving roughly percentage/number-of-cores total utilization.
Thread.Sleep is not very accurate. It never guarantees times exact to the millisecond, so you will see some variation in your results.
To answer your second question, about changing the utilization after a certain time: I suggest you run this method on one or more threads (depending on the number of cores), and when you want to change utilization, stop those threads and spawn new ones with the new percentage values. That way you don't have to implement thread communication to change the percentage of a running thread.
To add to Isak's response, here is a simple implementation for multicore:
public static void CPUKill(object cpuUsage)
{
    // Each thread runs its own work/sleep cycle; Main already spawns one
    // thread per core, so no extra parallelism is needed inside this method.
    Stopwatch watch = new Stopwatch();
    watch.Start();
    while (true)
    {
        if (watch.ElapsedMilliseconds > (int)cpuUsage)
        {
            Thread.Sleep(100 - (int)cpuUsage);
            watch.Reset();
            watch.Start();
        }
    }
}

static void Main(string[] args)
{
    int cpuUsage = 50;
    int time = 10000;
    List<Thread> threads = new List<Thread>();
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        Thread t = new Thread(new ParameterizedThreadStart(CPUKill));
        t.Start(cpuUsage);
        threads.Add(t);
    }
    Thread.Sleep(time);
    foreach (var t in threads)
    {
        // Note: Thread.Abort is unsupported on modern .NET (Core); there you
        // would use a shared cancellation flag instead.
        t.Abort();
    }
}
For uniform stressing: Isak Savo's answer with a slight tweak. The problem is interesting: in reality there are workloads that far exceed a spin loop in terms of wattage, thermal output, lane saturation, and so on, so a busy loop as the workload is a poor and somewhat unrealistic proxy.
int percentage = 80;
for (int i = 0; i < Environment.ProcessorCount; i++)
{
    (new Thread(() =>
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        while (true)
        {
            // Spin for "percentage" milliseconds, then sleep the remainder of
            // the 100 ms window. So 40% utilization means work 40 ms, sleep 60 ms.
            if (watch.ElapsedMilliseconds > percentage)
            {
                Thread.Sleep(100 - percentage);
                watch.Reset();
                watch.Start();
            }
        }
    })).Start();
}
Each cycle you increase the cpuUsageIncreaseby variable. For example:
1. Increase CPU usage by cpuUsageIncreaseby percent for one minute.
2. Drop back to 0% for 20 seconds.
3. Go to step 1.
private void test()
{
    int cpuUsageIncreaseby = 10;
    while (true)
    {
        for (int i = 0; i < 4; i++)
        {
            //Console.WriteLine("am running ");
            //DateTime start = DateTime.Now;
            int cpuUsage = cpuUsageIncreaseby;
            int time = 60000; // how long the CPU load phase lasts
            List<Thread> threads = new List<Thread>();
            for (int j = 0; j < Environment.ProcessorCount; j++)
            {
                Thread t = new Thread(new ParameterizedThreadStart(CPUKill));
                t.Start(cpuUsage);
                threads.Add(t);
            }
            Thread.Sleep(time);
            foreach (var t in threads)
            {
                t.Abort();
            }
            //DateTime end = DateTime.Now;
            //TimeSpan span = end.Subtract(start);
            //Console.WriteLine("Time Difference (seconds): " + span.Seconds);
            //Console.WriteLine("10 sec wait... for another.");
            cpuUsageIncreaseby = cpuUsageIncreaseby + 10;
            System.Threading.Thread.Sleep(20000);
        }
    }
}

Runtime Error With Interrupt Timer on ATmega2560

I'm trying to make a loop execute every 50 milliseconds on an ATmega2560. Using a simple delay function won't work, because the total loop time ends up being the time it took to execute the other functions in the loop plus the delay time. This works even less well if your function calls take variable time, which they usually will.
To solve this, I implemented a simple timer class:
volatile unsigned long timer0_ms_tick;

timer::timer()
{
    // Set timer0 registers
    TCCR0A = 0b00000000; // nothing here
    TCCR0B = 0b00000000; // timer stopped; start() sets the last three bits to 011 for a prescaler of 64
    TIMSK0 = 0b00000001; // last bit set to enable the timer0 overflow interrupt
    sei();               // enable global interrupts
}

void timer::start()
{
    timer0_ms_tick = 0;
    // Set timer value for a 1 ms tick: (250000 ticks/sec) * (1 OVF / 250 ticks) = 1000 OVF/sec
    // 256 ticks - 250 ticks = 6 ticks, but starting at 0 means setting to 5
    TCNT0 = 5;
    // Set prescaler and start timer
    TCCR0B = 0b00000011;
}

unsigned long timer::now_ms()
{
    return timer0_ms_tick;
}

ISR(TIMER0_OVF_vect)
{
    timer0_ms_tick += 1;
    TCNT0 = 5;
}
The main loop uses this like so:
unsigned long startTime, now;
char time_string[12]; // buffer for ltoa (declaration not shown in the original snippet)

while (true)
{
    startTime = startup_timer.now_ms();

    /* Loop Functions */

    // Wait out the rest of the 50 ms time step
    now = startup_timer.now_ms();
    while (now - startTime < 50)
    {
        now = startup_timer.now_ms();
    }
    Serial0.print(ltoa(now, time_string, 10));
    Serial0.writeChar('-');
    Serial0.print(ltoa(startTime, time_string, 10));
    Serial0.writeChar('=');
    Serial0.println(ltoa(now - startTime, time_string, 10));
}
My output looks like this:
11600-11550=50
11652-11602=50
11704-11654=50
11756-11706=50
12031-11758=273
11828-11778=50
11880-11830=50
11932-11882=50
11984-11934=50
12036-11986=50
12088-12038=50
12140-12090=50
12192-12142=50
12244-12194=50
12296-12246=50
12348-12298=50
12400-12350=50
12452-12402=50
12504-12454=50
12556-12506=50
12608-12558=50
12660-12610=50
12712-12662=50
12764-12714=50
12816-12766=50
12868-12818=50
12920-12870=50
12972-12922=50
13024-12974=50
13076-13026=50
13128-13078=50
13180-13130=50
13232-13182=50
13284-13234=50
13336-13286=50
13388-13338=50
13440-13390=50
13492-13442=50
13544-13494=50
13823-13546=277
13620-13570=50
It seems to work well most of the time, but every once in a while something odd will happen with the timing values. I think it has something to do with the interrupt, but I'm not sure what. Any help would be greatly appreciated.
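One classic cause of glitches like these (offered as a guess, since the thread ends without an answer): on an 8-bit AVR, reading the 4-byte timer0_ms_tick takes several instructions, and if the overflow ISR fires in the middle of the read, the main loop sees a torn value. A minimal sketch of guarding the read in now_ms() (this fix is an assumption, not from the original post):

#include <avr/io.h>        // SREG
#include <avr/interrupt.h> // cli()

unsigned long timer::now_ms()
{
    unsigned long t;
    uint8_t sreg = SREG; // remember whether interrupts were enabled
    cli();               // block the overflow ISR while all 4 bytes are copied
    t = timer0_ms_tick;
    SREG = sreg;         // restore the previous interrupt state
    return t;
}

If torn reads are the cause, the occasional 273/277 readings should disappear, because now and startTime can no longer mix bytes from two different tick values.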

How can I change the background color in a while loop in Processing?

I'm new to Processing and trying to make a very simple program where an Arduino produces serial input (an analogue read value). The idea is that a Processing window opens with a block color shown for 30 seconds. In this time all the readings from the Arduino are summed and averaged, creating an average for that color.
After 30 seconds the color changes and a new average (for the next color) starts being calculated. This is the code I have started to write (for now focusing on just one 30-second period of green).
I realise there are likely problems with the reading/summing and averaging (I haven't researched these yet, so I'll put that to one side), but my main question is: why isn't the background green? When I run this program I expect the background to be green for 30 seconds, whereas what happens is it is white for 30 seconds and then changes to green. Can't figure out why! Thanks for any help!
import processing.serial.*;

Serial myPort;
float gsrAverage;
float greenAverage;
int gsrValue;
int greenTotal = 0;
int greenCount = 1;
int timeSinceStart = 0;
int timeAtStart;
int count = 0;
color green = color(118, 236, 0);

void setup () {
    size(900, 450);
    // List all the available serial ports
    //println(Serial.list());
    myPort = new Serial(this, Serial.list()[0], 9600);
}

void draw () {
    while (timeSinceStart < 30000) {
        background(green);
        greenTotal = greenTotal + gsrValue;
        greenCount = greenCount + 1;
        delay(500);
        timeSinceStart = millis() - timeAtStart;
        //println(timeSinceStart); // for debugging
    }
    greenAverage = greenTotal / greenCount;
    //println(greenAverage); // for debugging
}

void serialEvent (Serial myPort) {
    int inByte = myPort.read();
    // 0-255
    gsrValue = inByte;
}
What I like to do for timers is use if statements with millis(), or a constantly updated variable m, right inside the condition:
int timeSinceStart;
int m;

void setup() {
    timeSinceStart = millis(); // initialize here so it only happens once
}

void draw() {
    m = millis(); // constantly update the variable
    if (timeSinceStart + 30000 < m) {
        greenAverage = greenTotal / greenCount; // or whatever was outside the while loop
        timeSinceStart = millis();
    }
    // Anything that went inside the while loop can go here, or above the if
}
This makes it so that roughly every 30 seconds the background changes once, and you just re-update the timeSinceStart variable in there too. This way it only updates when you want it to, rather than constantly updating and breaking the code.
I tend not to use while loops in Processing as they usually cause headaches. Hope my example helps.
I may have found a way around this using an if statement. I had overlooked the fact that the draw() function is itself a loop, so I was able to use a variation of

if (timeSinceStart < 5000) {
    background(green);
}

within draw().
When dealing with timed events in Processing you should not use while loops inside the draw() function. The draw() function is itself a loop which redraws the screen each frame.
So what you should do is create a timer and let it do the switch for you inside draw(). In your case, if you want to start with a green screen, set it in setup(), and then alter it according to a timer in draw().
This is a suggestion on how you could solve your particular problem. Just change the cycle variable according to your need; in your case it would be 30000.
boolean isGreen = true;
int startTime = 0;
int lastTime = 0;
int cycle = 1000; // the cycle you need

void setup() {
    size(200, 200);
    background(0, 255, 0); // green
}

void draw() {
    startTime = millis();
    if (startTime > lastTime + cycle) {
        if (isGreen) {
            background(255); // white
        } else {
            background(0, 255, 0); // green
        }
        isGreen = !isGreen;
        lastTime = millis();
    }
}