I was wondering whether it is correct to use QThread to launch a new QProcess, or whether it is better to use QProcess::startDetached(). Reading standard output is essential, and starting a detached process does not let you read stdout even if the readyRead signals are connected. Instead I was thinking about starting a new QThread that would then run my QProcess, which seems like a better idea when it comes to signals and slots. What is the best approach to reading from and writing to the QProcess without blocking the UI?
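For context, the plain non-detached, signal-driven use of QProcess I am weighing this against looks roughly like this (a minimal sketch in PySide6 syntax for brevity, since the same signals exist in C++; the ping command is only a placeholder):

import sys
from PySide6.QtCore import QCoreApplication, QProcess

app = QCoreApplication(sys.argv)
proc = QProcess()

def on_output():
    # Runs on the event loop whenever the child writes to stdout, so nothing blocks the UI.
    sys.stdout.write(proc.readAllStandardOutput().data().decode())

proc.readyReadStandardOutput.connect(on_output)
proc.finished.connect(app.quit)
proc.start("ping", ["127.0.0.1"])   # placeholder command and arguments
app.exec()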
I'm using a Pharo 6.1 image to do some CSV file processing. Sometimes my image just becomes unresponsive — I have to terminate the Pharo VM from Task Manager.
Is it necessary to use something like:
ensure:[toStream close]
ensure:[fromStream close]
What's a simple but reliable approach to reading and especially writing files with Pharo?
The image is not really unresponsive but the UI thread for sure is when you do a long operation like reading a big CSV file.
The best approach would be to fork a process to do it.
Like in:
[ target doSomeLongThing ] fork.
You can view the process in the process browser from the world menu and terminate it there.
Now that is not really something one will think about when trying things out interactively in a playground.
What I do to alleviate this situation is twofold:
Put these things in a test. Why? Because tests have a maximum duration and if things are stuck, they will come back with a time exceeded notification from the test.
Start the Zinc REPL on a port, so that if the UI thread is kind of stuck I can access the system from a web browser and restart the UI thread.
I wish the interrupt key worked in all cases, but it doesn't, and that is a solid annoyance.
I am of the opinion that images should be considered discardable artifacts, and a CI job should be building them regularly (say, daily) so that we have some recovery path.
This is not really cool: using an image and being able to come back to it without rebuilding stuff all the time when doing explorations shouldn't frustrate us with UI thread blockages. I hate that. I have an image that is stuck on restart because of some Glamour problem; it is infuriating, as it cannot be debugged or anything.
You can also use:
(FileLocator imageDirectory / 'somefile.csv') asFileReference readStreamDo: [ :stream |
"do something with the stream" ].
This will do the ensure: bit for you. Cleaner. Easier.
For CSV, also give NeoCSV a shot, as it will do the file handling for you as well.
HTH
Basically, how can I make sure that, in my module, a specific process is current? I've looked at kick_process, but I'm not sure how to have my module execute in the context of that process after kicking it into kernel mode.
I found this related question, but it has no replies. I believe an answer to my question could help that asker as well.
Note: I am aware that if I want the task_struct of a process, I can look it up. I'm interested in running in a specific context since I want to call functions that reference current.
The best way I have found to do anything in the context of a particular process in the kernel is to sleep in process context (the wait_* family of functions) and wake that thread up when something needs to be done in that context. This of course means the application would have to call into the kernel via an ioctl or something similar, sleep on that thread, and be woken up whenever you need to do something. This seems to be a very widely used and popular mechanism.
Suppose I am running a Sikuli program and I want to pause it at a particular point, and then after some time resume it from the point where I paused, without affecting the process. Then I want to stop the process and exit from it, and everything up to the point where I stopped should be saved. Is this possible in Sikuli? If yes, then how?
Press Alt+Shift+c to kill a running Sikuli script.
No, Sikuli has no built-in capability to manage this for you. However, you can write all of these capabilities into your script or otherwise get them.
Pausing and resuming is most easily done on the Unix command line, where you can use control-z to suspend a program and fg to resume it. Windows has similar capabilities. Look for "suspend and resume process" to find some ways of doing this (there are many).
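If you end up scripting the suspend/resume rather than doing it from the shell, a small sketch using the third-party psutil package could look like this (psutil and the PID value are my own assumptions, not something Sikuli provides):

import psutil

target = psutil.Process(1234)   # placeholder PID of the process to pause
target.suspend()                # SIGSTOP on Unix, thread suspension on Windows
# ... later, when you want it to continue ...
target.resume()                 # SIGCONT on Unix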
Exiting from a program and then being able to re-start the program and have it resume (roughly) where it left off is called "checkpointing". The checkpointing packages I know of are intended for distributed computing and would probably be overkill for what you're doing, but you could take a look at the Wikipedia entry for suggestions. I suspect that implementing it yourself will be the easiest way to go.
For help with either of these topics, I recommend starting a new question specifying the language you're using (Jython or Java) and the operating system (Unix or Windows). The questions and answers to these aren't related to Sikuli.
For pause, you can use wait commands; if you want to resume, you need flags that you set at the beginning of the script and change according to what you want to wait for.
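As a rough illustration, a pause/resume flag in a Sikuli (Jython) script could look like this (the flag file path and the loop body are placeholders of mine, not Sikuli features):

import os
import time

PAUSE_FLAG = "/tmp/sikuli_pause"        # create this file to pause, delete it to resume

def wait_while_paused():
    # Loop as long as the flag file exists; the script resumes once it is removed.
    while os.path.exists(PAUSE_FLAG):
        time.sleep(1)

for step in range(100):
    wait_while_paused()
    # ... the actual automation step goes here, e.g. click(someImage) ...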
For closing the script, you can use the type() command wherever you want the script to quit; it is the equivalent of pressing Cmd-Shift-C when using the IDE:
type('c', KeyModifier.CMD + KeyModifier.SHIFT)
Hope this helps
I want to copy a file to an FTP server using wxFTP, but I would like to do this without blocking the UI and, even better, while displaying a progress bar. Can I do this without an extra thread?
I'm using wxLua, but I can adapt a solution written in any language as long as it uses a wxWidgets binding.
Try using wx.lib.delayedresult. It's available in wxPython, but it may be available in your wxWidgets binding too. It creates a separate worker thread and takes a consumer function that is called once the worker thread finishes its job. Quite a useful thing.
See wxPython docs for details.
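A minimal wxPython sketch of the pattern (the doUpload function and the file name are placeholders of mine; the real FTP transfer would go inside the worker):

import wx
from wx.lib.delayedresult import startWorker

def doUpload(path):
    # Long-running work runs on the worker thread; the return value goes to the consumer.
    return path

def onDone(delayedResult):
    # Called back on the GUI thread once the worker finishes.
    print("uploaded:", delayedResult.get())   # get() re-raises worker exceptions

app = wx.App(False)
startWorker(onDone, doUpload, wargs=("report.csv",))
wx.CallLater(1000, app.ExitMainLoop)           # just to keep this demo from running forever
app.MainLoop()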
What's wrong with starting your own Thread for this?
You could check the stream's CanRead() method periodically (through a timer or in the event loop, maybe) and only read when it returns true, but that will probably be a lot more complex than just starting a separate thread.
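For comparison, a hand-rolled upload thread that reports progress back to the GUI thread might look roughly like this (wxPython and ftplib are used here only for illustration, since any wxWidgets binding is acceptable; the host, credentials, frame and its gauge widget are made-up placeholders):

import threading
import ftplib
import wx

def upload(frame, path):
    with open(path, "rb") as f, ftplib.FTP("ftp.example.com", "user", "secret") as ftp:
        sent = [0]
        def progress(block):
            # storbinary calls this after each block; hop back to the GUI thread to update it.
            sent[0] += len(block)
            wx.CallAfter(frame.gauge.SetValue, sent[0])
        ftp.storbinary("STOR " + path, f, callback=progress)
    wx.CallAfter(frame.SetStatusText, "Upload complete")

# Usage from the GUI code:
# threading.Thread(target=upload, args=(frame, "report.csv"), daemon=True).start()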
Whether this is possible I don't know, but it would be mighty useful!
I have a process that fails periodically (running in Windows 2000). I then have just one chance to react to it before having to restart it and painfully wait for it to fail again. I didn't write the process, so I don't have the source to debug it. The failure is seemingly random.
With a snapshot of the process I could repeatedly and quickly test reactions to the failure.
I had thought of running inside a VM but this isn't possible in this instance.
EDIT:
@Jon Cage asked:
When you say a snapshot, you mean capturing a process when it's about to fail (including memory, program state etc. etc.) ...and then replaying its final few seconds repeatedly to see what effect it has on some other component?
This is exactly what I mean!
I think minidump is what you are looking for.
You can also use Userdump:

The User Mode Process Dumper (userdump) dumps any running Win32 process's memory image (including system processes such as csrss.exe, winlogon.exe, services.exe, etc.) on the fly, without attaching a debugger or terminating target processes. The generated dump file can be analyzed or debugged by using the standard debugging tools.
This article shows you how to use it.
My best bet is to start the process in a debugger (OllyDbg being my preferred tool).
The process will pause on an exception, and you can try to figure out what happened shortly before that.
This needs some understanding of assembler, and it does not let you create a snapshot of the process for later analysis. You would need to write your own debugger for that; it should be theoretically possible.