Permission error appears when continuously taking screenshots and opening the files - error-handling

I am trying to use Python to write an automation bot and need to continuously take screenshots of a certain area.
from PIL import ImageGrab
import time

def screenGrab():
    box = (1, 1, 2174, 1248)
    im = ImageGrab.grab(box)
    im.save("current_screenshot.png", 'PNG')
    # time.sleep(2)
Then, in another module, I continuously call this screenGrab() function and use PIL to analyze the screenshot until certain conditions are triggered and corresponding actions are taken. Then the loop of "taking screenshots - analyzing - waiting for a certain condition to be satisfied - taking actions" starts again.
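Roughly, the calling module looks like this (condition_met and take_action are placeholders for my actual analysis and bot actions, not real functions):
from PIL import Image
import time

while True:
    screenGrab()
    im = Image.open("current_screenshot.png")
    try:
        if condition_met(im):   # placeholder: PIL-based analysis
            take_action()       # placeholder: the bot's response
    finally:
        im.close()              # release the file before the next save
    time.sleep(0.5)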
It can work pretty well for a while (sometimes a whole day, sometimes only three minutes, depending on how lucky I am), but when I am unlucky, I see the following error:
PermissionError: [Errno 13] Permission denied: 'current_screenshot.png'
Why does this happen? What should I do to prevent it?
I have done some research online and think it may be that the file hasn't been properly closed when I call another module to open this image and analyze it. However, I am still confused why it should happen at all. Shouldn't Python completely finish every command before it goes to the next? Before it runs whatever command to open the current_screenshot.png, it must have finished the im.save("current_screenshot.png", 'PNG') command, which should have properly closed the file, right? Then how can this error pop up?
By the way, the error does happen less frequently when I un-comment the time.sleep(2) line to give my laptop more time to save the image, but doing this significantly reduces the efficiency of my code.
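One mitigation I have been considering (a sketch, not tested code; the retry count and temporary file name are arbitrary) is to save to a temporary file and swap it in atomically, retrying briefly while the file is still locked:
import os
import time
from PIL import ImageGrab

def screenGrab(retries=10):
    box = (1, 1, 2174, 1248)
    im = ImageGrab.grab(box)
    im.save("current_screenshot.tmp.png", 'PNG')
    for _ in range(retries):
        try:
            # os.replace is atomic on the same volume, so a reader
            # never sees a half-written file
            os.replace("current_screenshot.tmp.png", "current_screenshot.png")
            return
        except PermissionError:
            time.sleep(0.1)   # another process still holds the file; retry
    raise PermissionError("screenshot file stayed locked after retries")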

Related

Error 3 after OPEN DATASET if big data volume is processed, none otherwise

The problem is that I received a ticket from the AMS support team which I cannot debug, because for the given input parameters on the selection screen the program runs for 10 hours, which is why it is scheduled as a background job.
The point of the program is that it should save some data in xls file on the application server.
The important thing is that for some input parameters on the selection screen the program WORKS (smaller date intervals, i.e. less data to work with), but right now I have to explain to the consultant why the program cannot write that much data into the file on the application server.
To conclude: a background job is linked to the program, which grabs a lot of data from the DB; in some cases, when there is an enormous amount of data, the program cannot open the file for output, so there is no data in the xls.
My question is: how big is the limit for OUTPUT mode in OPEN DATASET, and why do I get an "error opening file" when I use bigger intervals on the selection screen?
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE
IGNORING CONVERSION ERRORS.
IF sy-subrc EQ 0. "PROGRAM FAILS HERE, SY-SUBRC eq 3
The program works when we select less data from the DB. I have to provide an answer to the question: "Why does it fail when I grab a big amount of data?"
(Screenshot: error in dialog mode)
(Screenshot: error in background mode)
UPDATE: this answer assumes that the original direction ("because of data volume") was based on a misinterpretation of what happened, because of a simple coincidence. That often happens, but I may be wrong of course. The assumption is based on the latest OP comment: "What I found interesting: on the background job list, if there are 3 jobs for that user, two of them failed and the target server was the 2nd one, but there is one job which succeeded in opening the file, and its target system is system #1; the difference is that that job had a duration of ~1 hour, not 10 hours like the two others."
When you run a background job and there is an error opening a file from time to time, it may be because your ABAP system has several application servers and at least one of them is not configured correctly to map a given folder to a "network" folder shared by all the other application servers.
To make sure, check in which application server the failed job was executed by displaying its details (transaction code SM37). Then run the program twice with the same input parameters: once on the application server where a job failed, and once on the one where a job succeeded.
It should fail and succeed accordingly.
To run a program on a given application server, there are two solutions:
Either start a job indicating the desired target application server,
Or switch your SAP GUI user session to the application server you want:
Use SM51 to display the list of all application servers,
Double-click the server concerned; this opens its overview screen in a new user session started on that server,
Enter /NSE38 in the command field and start the program in dialog mode (it will run on that server).
Once it is almost certain this is the cause, ask the administrator to correct the issue: on the failing application server, a "mapping" should be added from the file folder to the shared folder (the same as was done on the other application servers).

How to autosave notebooks in Google Colab?

I was recently working in a notebook on Google Colab and my computer ran out of battery and died. All the progress I had made was not saved anywhere!
I'm very used to Jupyter notebooks, which save my files pretty much every time I execute a cell.
Is there a way to have an equivalent feature in Google Colab?
Autosave is already implemented in Google Colab, but there is a certain delay between the moment you execute a cell and when the save occurs.
You can try this yourself by going into File > Revision History, executing a cell, and waiting for the list to refresh.
That being said, I have also experienced loss of data in the past, which I can't explain. It might be a glitch.
As a good practice, I try to save every time I remember.
Good luck.
Autosave every 60 seconds by running this "magic command" in a new code cell:
%autosave 60
Colab will confirm it when you run the cell by printing: "Autosave changes every 60 seconds"
To display the list of all magic commands, use the command:
%lsmagic
Additionally, you can call up the Quick Reference Guide, which describes all the magic commands and what they do, using the command:
%quickref
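Beyond autosaving the notebook itself, a common safety net is to persist computed state to Google Drive so it survives a crash or disconnect. A minimal sketch (the checkpoint path and contents are illustrative):
from google.colab import drive
import pickle

drive.mount('/content/drive')   # prompts for authorization on first run

state = {"epoch": 3, "loss": 0.17}   # whatever intermediate results matter

# write a checkpoint after expensive cells; the path is illustrative
with open('/content/drive/MyDrive/checkpoint.pkl', 'wb') as f:
    pickle.dump(state, f)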
Enjoy!

TensorBoard doesn't do anything upon execution

Whenever I execute TensorBoard I just get:
Starting TensorBoard 54 at http://localhost:6006
(Press CTRL+C to quit)
and then nothing happens. Any advice on how to get the graph to show?
EDIT: Sorry I meant to clarify that I copy and paste "http://localhost:6006" into my browser and "No scalar data was found" appears.
FIXED: I was not typing the correct log directory. For anyone in the future who has this problem, don't be like me and assume that TensorBoard automatically reads through the /tmp directory.
One possible reason is not giving the correct log directory; another common cause is forgetting to write the summaries in the first place.
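For illustration, here is a minimal sketch using the TF 2 tf.summary API (the question predates TF 2, but the principle is the same: write summaries to a directory and point --logdir at exactly that directory):
import tensorflow as tf

# write scalar summaries so TensorBoard has data to display;
# the log directory is illustrative
writer = tf.summary.create_file_writer('/tmp/my_logs')
with writer.as_default():
    for step in range(100):
        tf.summary.scalar('loss', 1.0 / (step + 1), step=step)
writer.flush()

# then launch TensorBoard against the SAME directory:
#   tensorboard --logdir /tmp/my_logs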

GoodData - debugging a graph (grf file)

I've got a graph that isn't behaving as it should in CloudConnect.
I'm running it locally, and it's completing, but not doing its work.
In an effort to figure out why this is, I've added printLog calls in many places, like the following:
printLog(warn, 'transform from file ' + $in.0.fileName);
printLog(debug, 'joining etc');
The phase consists of a FileList into a SimpleCopy, into a LookupJoin, a Reformat (producing SQL), and a DBInsert.
However, while I see logs for the phases above, I'm not seeing anything produced in the log for any part of my phase, even though all parts of the phase report running successfully in the log. I've also turned on Enable Debugging on all connections in this phase.
Am I missing something to enable logging? Is there a better way to debug processing in CloudConnect?
Discovered the problem: the FileList will succeed even if the source file cannot be found, but none of the subsequent steps will then fire. It's somewhat unintuitive, since the log file says 'succeeded'.
For debugging, after a run you can access the data by right-clicking on the connection and selecting "View Data".
Sorry for the elementary question, but documentation didn't seem to cover this clearly, at least for a GoodData noob. I'll leave it up for anyone with the same problem!

What would cause a Windows Installer package to lose its progress status?

I have a WiX installer project which has recently been producing installers that don't show any file installation progress. That stage takes around 30 seconds to complete, and users may think that the install has hung since the progress bar remains empty until the install suddenly completes.
I know there used to be a progress bar for the installation, but I don't know when the change occurred.
I'll start to narrow down what change caused this, but are there any common causes for this problem?
Did you create any custom actions? If so, be aware that you need to implement ProgressText messages and/or pump messages to update the ticks on the ProgressBar control.
Generate a verbose log file. The log file will have timestamps for all the actions (look for "Action start" type messages). A quick script can shred the log file and give you the times between actions. I did this for a few months at the end of Office 2003 as we tracked down performance issues in the install. I wish I still had the VBScript (yeah, yeah, I know) that I wrote to get me the numbers.
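As a sketch of that kind of script, in Python rather than VBScript (it assumes the "Action start HH:MM:SS: Name." line format of verbose MSI logs, and that the log is UTF-16 encoded, which is typical but can vary):
import re
import sys
from datetime import datetime

# usage: python msi_times.py install.log
# parses "Action start HH:MM:SS: ActionName." lines from a verbose MSI log
pattern = re.compile(r'Action start (\d{2}:\d{2}:\d{2}): (\S+)\.')

with open(sys.argv[1], encoding='utf-16', errors='ignore') as f:
    starts = [(datetime.strptime(m.group(1), '%H:%M:%S'), m.group(2))
              for m in map(pattern.search, f) if m]

# elapsed time between consecutive action starts
for (t, name), (t_next, _) in zip(starts, starts[1:]):
    print(f'{name}: {(t_next - t).total_seconds():.0f}s')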
It almost always comes down to custom actions.