Behaviour of feed operator pipeline - Raku

I'm experimenting with the feed operator using the following code:
my $limit=10000000;
my $list=(0,1...4);
sub task($_,$name) {
say "Start $name on $*THREAD";
loop ( my $i=0; $i < $limit; $i++ ){};
say "End $name";
Nil;
}
sub stage( &code, *@elements --> Seq) {
map(&code, @elements);
}
sub feeds() {
my @results;
$list
==> map ({ task($_, "stage 1"); say( now - ENTER now );$_+1})
==> map ({ task($_, "stage 2"); say( now - ENTER now );$_+1})
==> stage ({ task($_, "stage 3"); say( now - ENTER now );$_+1})
==> @results;
say @results;
}
feeds();
The task sub is just a loop to burn CPU cycles; it displays the thread used.
The stage sub is a wrapper around the map sub.
Each step of the feed pipeline calls the CPU-intensive task and times it. The result of each map is the input + 1.
The output when run is:
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.7286811
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.59053989
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.5955893
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.59050998
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.59472201
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.5968531
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.5917188
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.587358
Start stage 1 on Thread<1>(Initial thread)
End stage 1
0.58689858
Start stage 2 on Thread<1>(Initial thread)
End stage 2
0.59177099
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.8549498
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.8560015
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.77634317
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.6754558
Start stage 3 on Thread<1>(Initial thread)
End stage 3
3.672909
[3 4 5 6 7]
The result in @results is correct (each input from $list is incremented three times).
The output of the first two stages alternates (values are pulled through one at a time), but the third stage doesn't execute until all of the input to stage 2 has been consumed.
Is there an issue with my wrapper around the map sub causing this behaviour?
Also, the time it takes to evaluate via the wrapper is significantly longer than a direct call to map.
Any help is appreciated.

The slurpy parameter is waiting to consume the entire sequence being passed to it. Consider the difference between:
perl6 -e 'sub foo($) { }; my $items := gather for 1..Inf { say $_ }; foo($items);'
perl6 -e 'sub foo(*@) { }; my $items := gather for 1..Inf { say $_ }; foo($items);'
The first example will finish; the second one never will. Thus you would want to change your stage function to:
sub stage( &code, $elements --> Seq) {
map(&code, $elements);
}
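The same eager-versus-lazy distinction can be illustrated in Ruby, where a splatted parameter plays the role of the Raku slurpy. This is a hypothetical stand-in (the method names are illustrative, not from the question): a lazy enumerator passed as a plain positional argument stays unevaluated, while splatting it into a slurpy-style parameter drains it immediately.

```ruby
# Track which elements the sequence has actually produced.
consumed = []
lazy = (1..5).lazy.map { |x| consumed << x; x }

def positional(seq)  # like sub foo($)  -- receives the sequence itself
end

def slurpy(*items)   # like sub foo(*@) -- flattens, draining the sequence
end

positional(lazy)
after_positional = consumed.dup  # nothing evaluated yet

slurpy(*lazy)
after_slurpy = consumed.dup      # the splat consumed the whole sequence
```

Here `after_positional` is `[]` while `after_slurpy` is `[1, 2, 3, 4, 5]`, which mirrors why the slurpy `*@elements` forced the whole upstream stage to run before stage 3 started.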

Related

Elixir - how to run multiple processes under the supervision of a single Supervisor

The program consists of three modules. The Print module receives a number from the keyboard, passes it to another module, receives the response, and displays it on the screen. The Proc1 and Proc2 modules receive a number, perform calculations, and send the result back.
defmodule Launch do
@moduledoc """
Documentation for `Launch`.
"""
@doc """
"""
def start() do
children = [
%{
id: Print,
start: {Print, :print, []}
},
%{
id: Proc1,
start: {Proc1, :proc1, []}
},
%{
id: Proc2,
start: {Proc2, :proc2, []}
}
]
Supervisor.start_link(children, strategy: :one_for_one)
end
end
defmodule Print do
def print() do
num =
IO.gets("Input number: ")
|> String.trim()
|> String.to_integer()
if num >= 0 do
send(Proc1, {self(), num})
else
send(Proc2, {self(), num})
end
receive do
num -> IO.puts(num)
after
500 ->
print()
end
print()
end
end
defmodule Proc1 do
def proc1() do
receive do
{pid, num} ->
send(pid, 100/num)
proc1()
_e ->
IO.puts("Error")
end
end
end
defmodule Proc2 do
def proc2() do
receive do
{pid, num} ->
send(pid, 1000/num)
proc2()
_e ->
IO.puts("Error")
end
end
end
I am trying to run all processes under the supervision of a single Supervisor. But there is a problem-only the first "child" is started, the other "children" are not started. In the example above, the Print process will start, but Proc1 and Proc2 will not start. How do I run all processes under one Supervisor? Important note: the Print process must get the addresses of the Proc1 and Proc2 processes for communication.
There are many issues with the code you’ve posted.
Registered processes
To be able to use a process name as the Process.dest() in a call to Kernel.send/2, one should start the process registered under that name.
Supervisor.start_link/2
Supervisor.start_link/2 expects a list of tuples containing modules and functions that return immediately, having started the process as a side effect. These functions are called as-is, and there is no magic: if one of them is an infinitely recursive function, the execution flow deadlocks inside it, waiting for a message in receive/1.
Supervisor performs some magic by automatically monitoring and restarting children for you, but it does nothing to spawn the separate processes. GenServer encapsulates this functionality and provides a handy way to avoid worrying about spawning processes.
Solution
What you might do is spawn all three processes, monitor them manually, and react to the {:DOWN, ref, :process, pid, reason} message by respawning the dead process. This is effectively what Supervisor does under the hood for its children.
Launch
defmodule Launch do
def start() do
proc1 = spawn(&Proc1.proc1/0)
proc2 = spawn(&Proc2.proc2/0)
print = spawn(fn -> Print.print(proc1, proc2) end)
Process.monitor(proc1)
Process.monitor(proc2)
Process.monitor(print)
receive do
msg -> IO.inspect(msg)
end
end
end
Print
defmodule Print do
def print(pid1, pid2) do
num =
IO.gets("Input number: ")
|> String.trim()
|> String.to_integer()
if num >= 0 do
send(pid1, {self(), num})
else
send(pid2, {self(), num})
end
receive do
num -> IO.puts(num)
end
print(pid1, pid2)
end
end
The other two modules are fine.
Here is how it will look in iex:
iex|1 ▶ c "/tmp/test.ex"
#⇒ [Launch, Print, Proc1, Proc2]
iex|2 ▶ Launch.start
Input number: 10
10.0
Input number: 1000
0.1
Input number: a
#⇒ {:DOWN, #Reference<0.3632020665.3980394506.95298>,
# :process, #PID<0.137.0>,
# {:badarg,
# [
# {:erlang, :binary_to_integer, ["a"], []},
# {Print, :print, 2, [file: '/tmp/test.ex', line: 22]}
# ]}}
Now, instead of printing this out, respawn the failed process and you will get a bare implementation of supervised intercommunicating processes. For the :one_for_all strategy, that could be achieved with:
receive do
{:DOWN, _, _, _, _} ->
Process.exit(print, :normal)
Process.exit(proc1, :normal)
Process.exit(proc2, :normal)
start()
end

Jenkins - How to send boolean parameter using curl, http?

I am trying to send a boolean parameter to Jenkins using curl / HTTP GET, without success. I need someone to tell me what is wrong here.
I tried sending it as true, True and 1, but none of these work.
The job in Jenkins is set up as a pipeline and is parametrized. There are 3 parameters added from the GUI:
bp1 boolean 1
bp2 boolean 2
sp string (this one is to check if parameters are sent at all)
Default values are as below:
bp1 - false
bp2 - false
sp - null
Pipeline code:
stage ('bools') {
echo 'bool 1 is:' + params.bp1
echo 'bool 2 is:' + params.bp2
echo 'string is:' + params.sp
}
Command used to invoke build (in browser or postman):
http://X.X.X.X/jenkins/job/bool_debug/buildWithParameters?token=booltest&bp1=true&bp2=false$sp='this is text from param'
The expected result is bool 1 is:true, but I got bool 1 is:false. Jenkins did not change (tick the checkbox for) the boolean parameter when invoked from the API. In other words:
What I get:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] stage
[Pipeline] { (bools)
[Pipeline] echo
bool 1 is:false
[Pipeline] echo
bool 2 is:false
[Pipeline] echo
string is:'this is text from param'
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
Finished: SUCCESS
What it should be:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] stage
[Pipeline] { (bools)
[Pipeline] echo
bool 1 is:true <--------------------------
[Pipeline] echo
bool 2 is:false
[Pipeline] echo
string is:'this is text from param'
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
Finished: SUCCESS
I faced the same issue and think it's a Jenkins bug. As a workaround, I ended up creating another pipeline which triggers the desired pipeline with the params.

Bunny and RabbitMQ - Adapting the WorkQueue tutorial to cancel subscribing when the Queue is completely worked

I fill up my queue, check that it has the right number of tasks to work, and then have workers in parallel set to prefetch(1) to ensure each takes just one task at a time.
I want each worker to work its task, send a manual acknowledgement, and keep taking from the queue if there is more work.
If there is no more work, i.e. the queue is empty, I want the worker script to finish up and exit(0).
So, this is what I have now:
require 'bunny'
connection = Bunny.new("amqp://my_conn")
connection.start
channel = connection.create_channel
queue = channel.queue('my_queue_name')
channel.prefetch(1)
puts ' [*] Waiting for messages.'
begin
payload = 'init'
until queue.message_count == 0
puts "worker working queue length is #{queue.message_count}"
_delivery_info, _properties, payload = queue.pop
unless payload.nil?
puts " [x] Received #{payload}"
raise "payload invalid" unless payload[/cucumber/]
begin
do_stuff(payload)
rescue => e
puts "Error running #{payload}: #{e.backtrace.join('\n')}"
#failing stuff
end
end
puts " [x] Done with #{payload}"
end
puts "done with queue"
connection.close
exit(0)
ensure
connection.close
end
I still want to make sure I am done when the queue is empty. Below is the example from the RabbitMQ site (https://www.rabbitmq.com/tutorials/tutorial-two-ruby.html). It has a number of things we want for our work queue, most importantly manual acknowledgements, but it does not stop running, and I need that to happen programmatically when the queue is done:
#!/usr/bin/env ruby
require 'bunny'
connection = Bunny.new(automatically_recover: false)
connection.start
channel = connection.create_channel
queue = channel.queue('task_queue', durable: true)
channel.prefetch(1)
puts ' [*] Waiting for messages. To exit press CTRL+C'
begin
queue.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, body|
puts " [x] Received '#{body}'"
# imitate some work
sleep body.count('.').to_i
puts ' [x] Done'
channel.ack(delivery_info.delivery_tag)
end
rescue Interrupt => _
connection.close
end
How can this script be adapted to exit out when the queue has been completely worked (0 total and 0 unacked)?
From what I understand, you want your subscriber to end if there are no pending messages in the RabbitMQ queue.
Given your second script, you could avoid passing block: true; the call then returns instead of blocking when there's no more data to process, and at that point you can exit the program.
You can see that in the documentation: http://rubybunny.info/articles/queues.html#blocking_or_nonblocking_behavior
By default it's non-blocking.
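As a sketch of the resulting shape, the pattern is "pop until empty, then exit" rather than a blocking consumer. This stand-in uses Ruby's stdlib Queue instead of a real broker (with Bunny you would call queue.pop and stop when the payload comes back nil; the names and the work step here are illustrative):

```ruby
# Stand-in for a broker queue with a few pending tasks.
queue = Queue.new
3.times { |i| queue << "task #{i}" }

worked = []
loop do
  begin
    body = queue.pop(true)  # non-blocking pop
  rescue ThreadError        # queue is empty: stop consuming and exit
    break
  end
  worked << body            # real work (and the manual ack) would go here
end
# worked now holds ["task 0", "task 1", "task 2"]
```

Once the loop breaks, the script can close the connection and return 0, which is the shutdown behaviour the question asks for.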

Objective-C - How to concatenate /add text to a label

I have a label control on a storyboard (maybe I should use a text control?).
self.labelMsg = "Begin.."
I'm running a process with 5 steps. How do I append the output (another string message) to the label to show the status of the process?
So the label text looks like:
"Begin..
Step 1 Complete...
Step 2 Complete...
Step 3 Complete...
Step 4 Complete...
Step 5 Complete...
Done !!! You Rock!"
How do you concat / add to an existing string in Objective-C?
You can use stringByAppendingString:
self.labelMsg = @"Begin...";
// After step 1 completes
self.labelMsg = [self.labelMsg stringByAppendingString:@"\nStep 1 Complete..."];
// After step 2 completes
self.labelMsg = [self.labelMsg stringByAppendingString:@"\nStep 2 Complete..."];
// etc...

Running multiple exec commands and waiting to finish before continuing

I know I ask a lot of questions, and I know there is a lot on here about exactly what I am trying to do, but I have not been able to get it to work in my script or figure out why. I am trying to run several commands using exec in the background; the tests can take anywhere between 5 and 45 minutes each (longer if they have to queue for a license). It takes forever to run them back to back, so I was wondering what I need to do to make my script wait for them to finish before moving on to the next section of the script.
while {$cnt <= $len} {
# Begin count for running tests
set testvar [lindex $f $cnt]
if {[file exists $path0/$testvar] == 1} {
cd $testvar
} else {
exec mkdir $testvar
cd $testvar
exec create_symbolic_link_here
}
# Set up test environment
exec -ignorestderr make clean
exec -ignorestderr make depends
puts "Running $testvar"
set runtest [eval exec -ignorestderr bsub -I -q lin_i make $testvar SEED=1 VPDDUMP=on |tail -n 1 >> $path0/runtestfile &]
cd ../
incr cnt
}
I know there is nothing here to make the script wait for the processes to finish, but I have tried many different things and this is the only way I can get it to run everything. It just doesn't wait.
One way is to modify your tests to create a "finished" file. This file should be created whether the test completes correctly or fails.
Modify the startup loop to remove this file before starting the test:
catch { file delete $path0/$testvar/finished }
Then create a second loop:
while { true } {
after 60000
set cnt 1 ; # ?
set result 0
while { $cnt <= $len } {
set testvar [lindex $f $cnt]
if { [file exists $path0/$testvar/finished] } {
incr result
}
incr cnt
}
if { $result == $len } {
break
}
}
This loop as written will never exit if any one test doesn't create the 'finished' file, so I would add an additional stop condition (e.g. no more than one hour) to exit the loop.
Another way would be to save the process ids of the background processes in a list and then the second loop would check each process id to see if it is still running. This method would not require any modifications to the test, but is a little harder to implement (not too hard on unix/mac, harder on windows).
Edit: loop using process id check:
To use process ids, the main loop needs to be modified to save the process ids of the background jobs:
Before the main loop starts, clear the process id list:
set pidlist {}
In the main loop, save the process ids from the exec command (In tcl, [exec ... &] returns the background process id):
lappend pidlist $runtest ; # goes after the exec bsub...
A procedure to check for the existence of a process (for unix/mac). Tcl/Tk does not have any process control commands, so the unix 'kill' command is used: 'kill -0' only checks for process existence and does not affect the execution of the process.
# return 0 if the process does not exist, 1 if it does
proc checkpid { ppid } {
set pexists [catch {exec kill -0 $ppid}]
return [expr {1-$pexists}]
}
And the second loop to check to see if the tests are done becomes:
set tottime 0
while { true } {
after 60000
incr tottime 1 ; # in minutes
set result 0
foreach {pid} $pidlist {
if { ! [checkpid $pid] } {
incr result
}
}
if { $result == $len } {
break
}
if { $tottime > 120 } {
puts "Total test time exceeded."
break ; # or exit
}
}
If a test process gets hung and never exits, this loop will never exit, so a second stop condition on total time is used.