Parent process won't write correctly to Lua child process stdin

I have a Dart program, called file.dart, like so:
import 'dart:io';
import 'dart:convert';
main() {
  final file = Directory.current.path + '/file.lua';
  Process.start('lua', [file]).then((Process process) {
    print('opened process');
    process.stdout.pipe(stdout);
    process.stdin.add([4]);
    process.stdin.flush().then((blah) => print('flushed'));
  });
}
I have a Lua program, called file.lua, like so:
print('starting to read')
local data = io.stdin:read()
print('i read it ', data)
When I run the Dart program, this is what happens:
$ dart file.dart
opened process
flushed
starting to read
Then it just sits there forever. The Lua read is blocking and never picks up the bytes written by the Dart process.
I put a delay on the Dart process so that it wrote and flushed a second later. The output of running that was:
$ dart file.dart
opened process
starting to read
flushed
but it still hung forever.
So I have two questions:
1) Why won't the Lua process pick up the byte that the Dart process wrote?
2) Is there a super easy way to make the Lua read non-blocking, so that Lua would poll instead of block?
Please put the number of the question that you are answering as you answer it. Thanks!

I solved this issue by sending a '\n' (newline byte) at the end of my transmissions!
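By default Lua's io.stdin:read() reads a whole line, so it only returns once a newline arrives. A minimal sketch of the fix on the Dart side (byte 10 is '\n'; byte 4 is the payload from the example above):
process.stdin.add([4, 10]); // 10 == '\n', which lets io.stdin:read() return the line
process.stdin.flush().then((blah) => print('flushed'));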

Chronicle Queue: despite a minutely roll cycle, deleting the chronicle file after processing keeps the file in the lsof open list and does not release memory

I am using Chronicle Queue version 5.20.123 and OpenJDK 11 on Linux Ubuntu 20.04. When we recycle the current cycle on the minute roll, I listen for StoreFileListener.onReleased and delete the file, but the file remains open, the memory is not released, and the file does not get deleted.
Please guide me on what needs to be done in order to make it work.
The StoreFileListener is implemented like this:
storeFileListener = new StoreFileListener() {
    @Override
    public void onReleased(int cycle, File file) {
        file.delete();
    }
};
The Chronicle Queue is created as follows:
eventStore = SingleChronicleQueueBuilder.binary(GlobalConstants.CURRENT_DIR
+ GlobalConstants.PATH_SEPARATOR + EventBusConstants.EVENT_DIR
+ GlobalConstants.PATH_SEPARATOR + eventType)
.rollCycle(RollCycles.MINUTELY)
.storeFileListener(storeFileListener).build();
tailer = eventStore.createTailer();
appender = eventStore.acquireAppender();
previousCycle = tailer.cycle();
Recycling of the previous cycle when processing completes:
var store = eventStore.storeForCycle(previousCycle,0,false,null);
eventStore.closeStore(store);
lsof shows the deleted chronicle files still being held open.
Manually getting hold of the store and trying to close it will do nothing but interfere with reference counting - you increase and then decrease the number of references.
Chronicle Queue will automatically release the resources for a given store after all appenders and tailers using that store are done with it. In your case it's unclear what you do with your tailer, but if it is already reading from the new file, the old one - and the resources associated with it - will be released, although this is done in the background and might not happen immediately.
PS: file.delete() returns a boolean, and it's always a good idea to check the return value to see whether the delete was successful (in your case it can be seen that it was, but checking is still considered good practice).
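For instance, the listener from the question could check the result roughly like this (a minimal sketch; the error message is just an illustration):
storeFileListener = new StoreFileListener() {
    @Override
    public void onReleased(int cycle, File file) {
        if (!file.delete()) {
            // the store may still be memory-mapped by an appender or tailer at this point
            System.err.println("could not delete " + file + " for cycle " + cycle);
        }
    }
};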

debugging vxworks loadModule failure

I have a VxWorks Image Project without a file system on an MPC5200B, using the DIAB tool-chain.
I need to dynamically load a module from flash.
I allocated memory on my stack, char myTemporaryModuleData[MAX_MODULE_SIZE],
and filled it with the module data from flash
(I checked that the binary data is intact).
Then I create a memDevice('/mem/mem01', myTemporaryModuleData, moduleReadLength)
and open the pseudo-stream: int fdModuleData = open("/mem/mem01", O_RDONLY, 777);
When I run int mId = loadModule(fdModuleData, LOAD_ALL_SYMBOLS); I do not see anything in the console,
but mId = 0, which indicates failure :(.
getErrno() returned 0x3D0004 (S_objLib_OBJ_TIMEOUT)
NOTE: it didn't take long at all to fail => timeout?
I tried replacing the module with a simple void foo() { printf(...); } module, but it still fails with the same issue.
I also tried loading an .out instead of an .o.
Unfortunately, none of this got me anywhere.
How can I find out what caused it to fail? (A log, a last error, anything I should check?)
FOUND IT.
Apparently, it was a mistake in the data read from the flash.
What I can contribute is that loadModule() from a memDrv device is possible and works.
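For anyone hitting the same thing, here is a rough sketch of the working sequence (assuming the standard VxWorks memDrv()/memDevCreate() pseudo-device calls; variable names are taken from the question, error checking omitted):
char myTemporaryModuleData[MAX_MODULE_SIZE];
/* ... copy the module image from flash into myTemporaryModuleData ... */
memDrv();                                  /* install the memory pseudo-driver (often already present) */
memDevCreate("/mem/mem01", myTemporaryModuleData, moduleReadLength);
int fdModuleData = open("/mem/mem01", O_RDONLY, 0777);
MODULE_ID mId = loadModule(fdModuleData, LOAD_ALL_SYMBOLS);
close(fdModuleData);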

Why don't all the shell processes in my promises (start blocks) run? (Is this a bug?)

I want to run multiple shell processes, but when I try to run more than 63, they hang. When I reduce max_threads in the thread pool to n, it hangs after running the nth shell command.
As you can see in the code below, the problem is not in start blocks per se, but in start blocks that contain the shell command:
#!/bin/env perl6
my $*SCHEDULER = ThreadPoolScheduler.new( max_threads => 2 );
my @processes;
# The Promises generated by this loop work as expected when awaited
for @*ARGS -> $item {
    @processes.append(
        start { say "Planning on processing $item" }
    );
}
# The nth Promise generated by the following loop hangs when awaited (where n = max_threads)
for @*ARGS -> $item {
    @processes.append(
        start { shell "echo 'processing $item'" }
    );
}
await(@processes);
Running ./process_items foo bar baz gives the following output, hanging after processing bar, which is just after the nth (here 2nd) thread has run using shell:
Planning on processing foo
Planning on processing bar
Planning on processing baz
processing foo
processing bar
What am I doing wrong? Or is this a bug?
Perl 6 distributions tested on CentOS 7:
Rakudo Star 2018.06
Rakudo Star 2018.10
Rakudo Star 2019.03-RC2
Rakudo Star 2019.03
With Rakudo Star 2019.03-RC2, use v6.c versus use v6.d did not make any difference.
The shell and run subs use Proc, which is implemented in terms of Proc::Async. This uses the thread pool internally. By filling up the pool with blocking calls to shell, the thread pool becomes exhausted, and so cannot process events, resulting in the hang.
It would be far better to use Proc::Async directly for this task. The approach of using shell and a load of real threads won't scale well; every OS thread has memory overhead, GC overhead, and so forth. Since spawning a bunch of child processes is not CPU-bound, this is rather wasteful; in reality, just one or two real threads are needed. So, in this case, perhaps the implementation pushing back on you when doing something inefficient isn't the worst thing.
I notice that one of the reasons for using shell and the thread pool is to try and limit the number of concurrent processes. But this isn't a very reliable way to do it; just because the current thread pool implementation sets a default maximum of 64 threads does not mean it always will do so.
Here's an example of a parallel test runner that runs up to 4 processes at once, collects their output, and envelopes it. It's a little more than you perhaps need, but it nicely illustrates the shape of the overall solution:
my $degree = 4;
my @tests = dir('t').grep(/\.t$/);
react {
    sub run-one {
        my $test = @tests.shift // return;
        my $proc = Proc::Async.new('perl6', '-Ilib', $test);
        my @output = "FILE: $test";
        whenever $proc.stdout.lines {
            push @output, "OUT: $_";
        }
        whenever $proc.stderr.lines {
            push @output, "ERR: $_";
        }
        my $finished = $proc.start;
        whenever $finished {
            push @output, "EXIT: {.exitcode}";
            say @output.join("\n");
            run-one();
        }
    }
    run-one for 1..$degree;
}
The key thing here is the call to run-one when a process ends, which means that you always replace an exited process with a new one, maintaining - so long as there are things to do - up to 4 processes running at a time. The react block naturally ends when all processes have completed, due to the fact that the number of events subscribed to drops to zero.

How does Redis pipelining work in pyredis?

I am trying to understand how pipelining in Redis works. According to one blog I read, for this code:
Pipeline pipeline = jedis.pipelined();
long start = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
    pipeline.set("" + i, "" + i);
}
List<Object> results = pipeline.execute();
Every call to pipeline.set() effectively sends the SET command to Redis (you can easily see this by setting a breakpoint inside the loop and querying Redis with redis-cli). The call to pipeline.execute() is when the reading of all the pending responses happens.
So basically, when we use pipelining and execute any command like set above, the command gets executed on the server, but we don't collect the response until we call pipeline.execute().
However, according to the documentation of pyredis,
Pipelines are a subclass of the base Redis class that provide support for buffering multiple commands to the server in a single request.
I think this implies that when we use pipelining, all the commands are buffered and only sent to the server when we call pipe.execute(), so this behaviour is different from the behaviour described above.
Could someone please tell me what the right behaviour is when using pyredis?
This is not just a redis-py thing. In Redis, pipelining always means buffering a set of commands and then sending them to the server all at once. The main point of pipelining is to avoid extraneous network round trips - frequently the bottleneck when running commands against Redis. If each command were sent to Redis before the pipeline was run, this would not be the case.
You can test this in practice. Open up python and:
import redis
r = redis.Redis()
p = r.pipeline()
p.set('blah', 'foo') # this buffers the command. it is not yet run.
r.get('blah') # pipeline hasn't been run, so this returns nothing.
p.execute()
r.get('blah') # now that we've run the pipeline, this returns "foo".
I did run the test that you described from the blog, and I could not reproduce the behaviour.
Setting breakpoints in the for loop and running
redis-cli info | grep keys
does not show the number of keys increasing after every set command.
Speaking of which, the code you pasted seems to be Java using Jedis (which I also used).
Also, in the test I ran, and according to the documentation, there is no execute() method in Jedis, only exec() and sync().
I did see the values being set in Redis after the sync() call.
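For reference, the blog's loop written against the actual Jedis API would look roughly like this (a sketch; syncAndReturnAll() both sends the buffered commands and returns their replies):
Pipeline pipeline = jedis.pipelined();
for (int i = 0; i < 100000; i++) {
    pipeline.set("" + i, "" + i);                    // queued by the client, no reply read yet
}
List<Object> results = pipeline.syncAndReturnAll();  // send the batch and read back all replies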
Besides, this behaviour seems to be in line with the pyredis documentation.
Finally, the Redis documentation itself focuses on the networking optimization (quoting its example):
This time we are not paying the cost of RTT for every call, but just one time for the three commands.
P.S. Could you share the link to the blog you read?

How to flush the io buffer in Erlang?

How do you flush the io buffer in Erlang?
For instance:
> io:format("hello"),
> io:format(user, "hello").
This post seems to indicate that there is no clean solution.
Is there a better solution than in that post?
Sadly, other than properly implementing a flush "command" in the io/kernel subsystems and making sure that the low-level drivers that implement the actual io support such a command, you really have to simply rely on the system quiescing before closing. A failing, I think.
Have a look at io.erl/io_lib.erl in stdlib and file_io_server.erl/prim_file.erl in kernel for the gory details.
As an example, in file_io_server (which effectively takes the request from io/io_lib and routes it to the correct driver), the command types are:
{put_chars,Chars}
{get_until,...}
{get_chars,...}
{get_line,...}
{setopts, ...}
(i.e. no flush)!
As an alternative you could of course always close your output (which would force a flush) after every write. A logging module I have does something like this every time and it doesn't appear to be that slow (it's a gen_server with the logging received via cast messages):
case file:open(LogFile, [append]) of
    {ok, IODevice} ->
        io:fwrite(IODevice, "~n~2..0B ~2..0B ~4..0B, ~2..0B:~2..0B:~2..0B: ~-8s : ~-20s : ~12w : ",
                  [Day, Month, Year, Hour, Minute, Second, Priority, Module, Pid]),
        io:fwrite(IODevice, Msg, Params),
        io:fwrite(IODevice, "~c", [13]),
        file:close(IODevice);
    _Error ->
        ok          % (error handling omitted)
end
io:put_chars(<<>>) at the end of the script works for me.
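For instance, in an escript that would look something like this sketch:
#!/usr/bin/env escript
main(_Args) ->
    io:format("hello"),   % may still be sitting in the io buffer when the VM halts
    io:put_chars(<<>>).   % per the tip above, this pushes the pending output out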
You could run
flush().
from the shell, or try
flush() ->
    receive
        _ -> flush()
    after 0 -> ok
    end.
That works more or less like a C flush.