Timeout on drive.mount('/content/drive') in Google Colab - google-colaboratory

I am using Google Colab and I always get a timeout when I run the following command:
from google.colab import drive
drive.mount('/content/drive')
I have restarted the runtime as well, but nothing changed, although it was working yesterday.
Here is the error:
TIMEOUT: Timeout exceeded.
command: /bin/bash
args: [b'/bin/bash', b'--noediting']
buffer (last 100 chars): 'ZI [91298688] ui.cc:80:DisplayNotification Drive File Stream encountered a problem and has stopped\r\n'
before (last 100 chars): 'ZI [91298688] ui.cc:80:DisplayNotification Drive File Stream encountered a problem and has stopped\r\n'
after:
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 3135
child_fd: 76
closed: False
timeout: 120
delimiter:
logfile: None
logfile_read: None
logfile_send: None
maxread: 1000000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: searcher_re:
0: re.compile('google.colab.drive MOUNTED')
1: re.compile('root#32155b861949-0ddc780f6f5b40478d01abff0ab81cc1: ')
2: re.compile('(Go to this URL in a browser: https://.*)\r\n')

A common cause of timeouts is having many thousands of files or folders in your root Drive directory.
If that's the case for you, my recommendation is to move some of these items into folders in your root directory so that the root has fewer items.
Under the covers, mounting Drive as a FUSE filesystem requires listing the entire root directory, which takes time proportional to the number of files and folders you have and therefore leads to timeouts when there are many of them.
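If cleaning up the root does not help right away, it can also be worth retrying the mount, since a failed attempt may leave partial state behind (see the FAQ excerpt below). A minimal sketch, assuming the force_remount and timeout_ms keyword arguments exposed by recent versions of google.colab.drive; check your runtime's version if in doubt:

from google.colab import drive

# Retry the mount a few times; a later attempt can succeed where the
# first timed out because earlier attempts cache partial state.
for attempt in range(3):
    try:
        # force_remount discards a half-finished mount; timeout_ms raises
        # the default 120-second limit (assumed keyword argument).
        drive.mount('/content/drive', force_remount=True, timeout_ms=300000)
        break
    except Exception as exc:
        print(f'Mount attempt {attempt + 1} failed: {exc}')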

Why does drive.mount() sometimes fail saying "timed out", and why do I/O operations in drive.mount()-mounted folders sometimes fail?
Google Drive operations can time out when the number of files or sub-folders in a folder grows too large. If thousands of items are directly contained in the top-level "My Drive" folder, then mounting the drive will likely time out. Repeated attempts may eventually succeed, as failed attempts cache partial state locally before timing out. If you encounter this problem, try moving files and folders directly contained in "My Drive" into sub-folders. A similar problem can occur when reading from other folders after a successful drive.mount(). Accessing items in any folder containing many items can cause errors like OSError: [Errno 5] Input/output error (Python 3) or IOError: [Errno 5] Input/output error (Python 2). Again, you can fix this problem by moving directly contained items into sub-folders.
Note that "deleting" files or sub-folders by moving them to the Trash may not be enough; if that doesn't seem to help, make sure to also Empty your Trash. For your Reference

Can you check whether what you are pasting is actually the token that was generated?
I had this issue: copying to the clipboard was copying the link, not the token.
You might want to copy it manually.

Related

Hash 'hashcat': Token length exception

hashcat64.exe hashcat -m0 -a0 crackme.txt password.txt
Device #1: Intel's OpenCL runtime(GPU only) is currently broken. We
are waiting for updated OpenCL drivers from Intel
Hash 'hashcat': Token length exception No hashes loaded.
I'm getting this message. I've attached a snapshot of my CL.
I've looked for any spaces in the hash directory and its format.
I've also tried changing all the Unicode formats of the .txt file.
Nothing seems to work. I've also updated the Intel drivers.
Can anyone help, please? Thanks in advance.
I think you should look at the end of each line in the files containing your hashes and passwords. If there are spaces at the end of lines, you will get a "token length exception" or "No hashes loaded" error. Just remove those spaces and try again.
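If you would rather clean the files programmatically than by hand, a small sketch like the following strips trailing spaces, tabs, and stray carriage returns from every line; the filenames are the ones from the question and stand in for your own hash and wordlist files:

# Strip trailing whitespace from each line so hashcat does not reject
# the entries with "token length exception" / "No hashes loaded".
for filename in ('crackme.txt', 'password.txt'):
    with open(filename, 'r', newline='') as f:
        lines = [line.rstrip() for line in f]
    with open(filename, 'w', newline='\n') as f:
        f.write('\n'.join(lines) + '\n')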
For anyone looking into this: I used two rules; you can use many others to increase efficiency.
hashcat64.exe hashcat -m0 -a0 crackme.txt password.txt -r rules/best64.rule
or
hashcat64.exe hashcat -m0 -a0 crackme.txt password.txt -r rules/d3ad0ne.rule
This error can also occur if the hash file is not found. Note that the restore file effectively encodes the absolute path to the hash file, so this error can occur if the file has moved when you attempt to resume. (Technically, it saves the potentially relative path as specified when originally run, but it also saves the original working directory and changes into it first.)

USQL Job failing due to exceeding the path length limit

I am running my jobs locally using the Local SDK. However, I get the following error message:
Error : 'System.IO.PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
One of my colleagues was able to track down the error to the .ss file in the catalog folder inside DataRoot by running the project in a new directory in C:\. The path for the .ss file is
C:\HelloWorld\Main\Source\Data\Insights\NewProject\NewProject\USQLJobsForTesting.Tests\bin\Debug\DataRoot\_catalog_\database\d92bfaa5-dc7f-4131-abdc-22c50eb0d8c0\schema\f6cf4417-e2d8-4769-b633-4fb5dddcb066\table\aa136daf-9e86-4650-9cc3-119d607fb3b0\31a18033-099e-4c2a-aae3-75cf099b0fb1.ss
which exceeds the allowed limit of 260 characters. I cannot reduce the length of my project path because my organization follows a certain working directory format.
Is there any possible solution for this problem?
Try using subst in CMD to work around this problem by mapping a drive letter to the data root you want to use.
subst X: C:\PathToYourDataRoot
Then, in ADL Tools for Visual Studio, set the DataRoot to X:
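If you first want to confirm which catalog files are actually crossing the limit, a short sketch like this walks the DataRoot and reports any fully qualified path of 260 characters or more; the DataRoot path below is only an example and should be replaced with your own:

import os

data_root = r'C:\HelloWorld\Main\Source\Data\Insights\NewProject'  # example path

# Walk the DataRoot and flag fully qualified paths that hit the Windows limit.
for dirpath, dirnames, filenames in os.walk(data_root):
    for name in filenames:
        full_path = os.path.join(dirpath, name)
        if len(full_path) >= 260:
            print(f'{len(full_path)} chars: {full_path}')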

DATASET_CANT_CLOSE error number 32 "Broken Pipe"

I experienced an error in SAP ABAP that says DATASET_CANT_CLOSE with error number 32 (Broken Pipe). The question is: what procedure triggers this kind of error?
As far as I know, this error was triggered by:
CLOSE DATASET dset
But I can't reproduce the error, since I don't know what procedure actually triggers it.
This is the code I use:
method GENERATE_TXT_FILE.
  DATA:
    lwa_data TYPE t_line,
    lv_param TYPE sxpgcolist-parameters.

  "Upload File to Server
*Open Dataset
  OPEN DATASET im_file_name FILTER 'dos2ux'
    FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
  CLEAR lwa_data.

  LOOP AT it_data INTO lwa_data.
    CATCH SYSTEM-EXCEPTIONS file_access_errors = 4
                            OTHERS = 8.
      TRANSFER lwa_data-lines TO im_file_name.
    ENDCATCH.
    IF sy-subrc <> 0.
      CLEAR lwa_data.
      EXIT.
    ENDIF.
    CLEAR lwa_data.
  ENDLOOP.

*Close Dataset
  CLOSE DATASET im_file_name.
From what I could tell from the background job log, the server that ran the background job had not yet been mapped to the text file folder. The solution was to re-map the server to the text file folder.
You are using the FILTER extension to OPEN DATASET - which can be a HUGE security issue as well as raise loads of portability issues unless you know what you're doing, but that's not what the question is about. From the documentation:
When the statement OPEN DATASET is executed, a process is started in the operating system for the specified statement. When the file is opened for reading, a channel (pipe) is linked with STDOUT of the process, from which the data is read during file reading. The file itself is linked with STDIN of the process. When the file is opened for writing, a channel (pipe) is linked to STDIN of the process, to which data is passed when writing. The output of the process is diverted to this file.
In your case, the filter command probably decided to bail out - see this answer among many. Why it did is hard to investigate; you may have to go through various system logs to find out. If the problem really is an unmapped network folder, you could try switching to UNC paths.
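For readers less familiar with this mechanism, the rough equivalent outside ABAP is writing through a pipe into a filter process: a broken pipe then means the filter (here dos2ux) died before the writer was done. A loose Python analogy of the quoted behaviour, not the actual ABAP implementation; the dos2ux binary and the output path are placeholders:

import subprocess

# Rough analogy to OPEN DATASET ... FILTER 'dos2ux' FOR OUTPUT:
# our writes go to the filter's STDIN, and the filter's STDOUT is
# redirected into the target file.
with open('/tmp/outfile.txt', 'wb') as target:
    proc = subprocess.Popen(['dos2ux'], stdin=subprocess.PIPE, stdout=target)
    try:
        proc.stdin.write(b'first line\r\n')
        proc.stdin.close()   # roughly what CLOSE DATASET does with the pipe
    except BrokenPipeError:
        # If the filter process died early (missing binary on that host,
        # unmapped share, killed by the OS), writing to or closing the
        # pipe fails - the condition ABAP reports as error 32, Broken Pipe.
        print('filter process terminated before writing finished')
    proc.wait()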

Internal error while loading to Bigquery table

I ran this command to load 11 files into a BigQuery table:
bq load --project_id=ardent-course-601 --source_format=NEWLINE_DELIMITED_JSON dw_test.rome_defaults_20140819_test gs://sm-uk-hadoop/queries/logsToBq_transformLogs/rome_defaults/20140819/23af7218-617d-42e8-884e-f213a583094a/part* /opt/sm-analytics/projects/logsTobqMR/jsonschema/rome_defaultsSchema.txt
I got this error:
Waiting on bqjob_r46f38146351d545_00000147ef890755_1 ... (11s) Current status: DONE
BigQuery error in load operation: Error processing job 'ardent-course-601:bqjob_r46f38146351d545_00000147ef890755_1': Too many errors encountered. Limit is: 0.
Failure details:
- File: 5: Unexpected. Please try again.
I tried many times after that and still got the same error.
To debug what went wrong, I instead loaded each file one by one into the BigQuery table. For example:
/usr/local/bin/bq load --project_id=ardent-course-601 --source_format=NEWLINE_DELIMITED_JSON dw_test.rome_defaults_20140819_test gs://sm-uk-hadoop/queries/logsToBq_transformLogs/rome_defaults/20140819/23af7218-617d-42e8-884e-f213a583094a/part-m-00011.gz /opt/sm-analytics/projects/logsTobqMR/jsonschema/rome_defaultsSchema.txt
There are 11 files total and each ran fine.
Could someone please help? Is this a bug on BigQuery's side?
Thank you.
There was an error reading one of the files: gs://...part-m-00005.gz
Looking at the import logs, it appears that the gzip reader encountered an error decompressing the file.
It looks like that file may not actually be compressed. BigQuery samples the header of the first file in the list to determine whether it is dealing with compressed or uncompressed files and to determine the compression type. When you import all of the files at once, it only samples the first file.
When you load the files individually, BigQuery reads the header of each file, determines that it isn't actually compressed (despite the '.gz' suffix), and imports it as a normal flat file.
If you run a load that doesn't mix compressed and uncompressed files, it should work successfully.
Please let me know if you think this is not the case and I'll dig in some more.
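If you want to verify the files yourself before re-running the load, checking the gzip magic bytes is enough to tell a genuinely compressed file from one that merely carries a .gz suffix. A minimal sketch for copies that have been downloaded locally (for example with gsutil cp); the filenames are placeholders:

# A real gzip file starts with the two magic bytes 0x1f 0x8b.
def is_gzip(path):
    with open(path, 'rb') as f:
        return f.read(2) == b'\x1f\x8b'

for name in ['part-m-00005.gz', 'part-m-00011.gz']:  # placeholder filenames
    print(name, 'gzip' if is_gzip(name) else 'NOT gzip despite the suffix')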

Creating an image retention test in Builder view

I just downloaded PsychoPy this morning and have spent the day trying to figure out how to work with Builder view. I watched the YouTube video "Build your first PsychoPy experiment (Stroop task)" by Jon Pierce. In his video he explains how to make a conditions file in Excel to be used in his experiment. I wanted to make a very similar test where images would appear and subjects would be required to give a yes or no answer to them (the correct answer is already predefined). In his conditions file he had the columns 'word', 'colour' and 'corrANS'. I was wondering whether, instead of a 'word' column, I can have an 'image' column. In this column I would like to list all my images in the same way I would words, and have each correlated with a correct answer of either 'yes' or 'no'. We tried doing this and added images to the conditions file, but we haven't been able to run the test successfully and were hoping somebody could help us.
Thank you in advance.
P.S. We are not familiar with Python, or code in general, so we were hoping to get this running using Builder view.
EDIT: Here is the error message we are receiving when running the program
#### Running: C:\Users\mr00004\Desktop\New folder\1_lastrun.py
4.8397 ERROR Couldn't find image file 'C:/Users/mr00004/Desktop/New folder/PPT Retention 1/ Slide102.JPG'; check path?
Traceback (most recent call last):
File "C:\Users\mr00004\Desktop\New folder\1_lastrun.py", line 174, in
image.setImage(images)
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy-1.80.03-py2.7.egg\psychopy\visual\image.py", line 271, in setImage
maskParams=self.maskParams, forcePOW2=False)
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy-1.80.03-py2.7.egg\psychopy\visual\basevisual.py", line 652, in createTexture
% (tex, os.path.abspath(tex))#ensure we quit
OSError: Couldn't find image file 'C:/Users/mr00004/Desktop/New folder/PPT Retention 1/ Slide102.JPG'; check path? (tried: C:\Users\mr00004\Desktop\New folder\PPT Retention 1\ Slide102.JPG)
Yes, certainly, that is exactly how PsychoPy is designed to work. Simply place the image names in a column in your conditions file. You can then use the name of that column in the Builder Image component's "Image" field. The appropriate image file for a given trial will be selected.
It is difficult to help you further, though, as you haven't specified what went wrong. "we haven't had any success" doesn't give us much to go on.
Common problems:
(1) Make sure you use full filenames, including extensions (.jpg, .png, etc.). Extensions aren't always visible in Windows, but they are needed by Python.
(2) Have the images in the right place. If you just use a bare filename (e.g. image01.jpg), then PsychoPy will expect the file to be in the same directory as your Builder .psyexp file. If you want to tidy the images away, you could put them in a subfolder; if so, you need to specify a relative path along with the filename (e.g. images/image01.jpg). The sketch after this list shows a quick way to check that every path in your conditions file resolves.
(3) Avoid full paths (starting at the root level of your disk): they are prone to errors, and stop the experiment being portable to different locations or computers.
(4) Regardless of platform, use forward slashes (/) not backslashes (\) in your paths.
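To check points (1)-(3) quickly, a small script placed next to your .psyexp file can confirm that every entry in the image column of the conditions file points to an existing file. PsychoPy accepts .xlsx or .csv conditions files; the sketch below assumes a .csv copy, and the filename and column name are placeholders for your own:

import csv
import os

conditions_file = 'conditions.csv'   # placeholder: a CSV copy of your conditions file
image_column = 'image'               # placeholder: the column holding image filenames

with open(conditions_file, newline='') as f:
    for row in csv.DictReader(f):
        path = row[image_column]
        if not os.path.isfile(path):
            # Catches wrong extensions, wrong subfolders, and stray spaces
            # at the start or end of the filename.
            print('Missing or misspelled:', repr(path))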
Make a new folder on the H: drive and fill in the image column in PsychoPy as e.g. 'H:\psych\cat.jpg'. It works for me.