I mounted Google Drive account A in Colab. Now I want to switch to account B, but I cannot, because there is no way for me to enter a new authentication key when executing drive.mount().
What I have tried and failed:
restarting the browser and restarting the computer
using force_remount=True in drive.mount(); it only automatically remounts account A and never asks me for a new mount target
changing account A's password
changing the runtime type from GPU to None and back to GPU
opening everything in incognito mode
signing out of all Google accounts
How can I:
forget the previous authentication key so it asks me for a new one?
unmount the drive and forget the previous authentication key?
I have found that 'Restart runtime...' does not work, and changing permissions is too much of a hassle.
Luckily, the drive module is equipped with just the function you need:
from google.colab import drive
drive.flush_and_unmount()
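As a minimal sketch (assuming the standard /content/drive mountpoint), the full switch would look roughly like this; after the flush, the next mount call should present a fresh authorization prompt where account B can be chosen:

from google.colab import drive

# Detach the currently mounted Drive (account A) and flush any pending writes.
drive.flush_and_unmount()

# Mounting again should trigger a new authorization flow where account B can be selected.
drive.mount('/content/drive', force_remount=True)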
You can reset your Colab backend by selecting the 'Reset all runtimes...' item from the Runtime menu.
Be aware, however, that this will discard your current backend.
Another solution for your problem could be to terminate your session and run your code (drive.mount()) again.
Steps:
1) Press "Additional connection options" button. Is the little sign button next to RAM and DISK
2) Select "Manage sessions"
3) Press the "Terminate" button
4) Run again your code (drive.mount()).
Now you will be asked to enter your new key.
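Once the session has been terminated, re-running the mount code (a minimal sketch, assuming the standard /content/drive mountpoint) should bring up a fresh authorization prompt:

from google.colab import drive

# With the old backend terminated, this prompts for authorization again,
# so a different Google account can be selected.
drive.mount('/content/drive')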
To force Colab to ask for a new key without waiting or resetting the runtime, you can revoke the previous key.
To do this:
go to https://myaccount.google.com/permissions (or manually navigate to Security → Manage third-party access on your Google account page),
on the top right, select your profile image or initial, and then select the account whose Drive you want to disconnect from Colab,
select Google Drive File Stream in the Google apps section, then select Remove access.
Executing drive.mount() will now ask for a new key.
Remounting does not work when you have recently mounted and then unmounted using flush_and_unmount(). The correct steps to follow are (this worked for me at the time of posting):
After mounting using:
from google.colab import drive
drive.mount('/content/drive')
Unmount using drive.flush_and_unmount(). Even though you can no longer see the 'drive/' folder, trust me: you should still run !rm -rf /content/drive before remounting the drive using:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
And you will again get the authorization request, where you can choose a different Google account.
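Putting these steps together, the whole sequence is roughly the following sketch (it assumes the standard /content/drive mountpoint, as above):

from google.colab import drive

drive.mount('/content/drive')        # initial mount with the old account
drive.flush_and_unmount()            # unmount; the drive/ folder disappears
!rm -rf /content/drive               # clear any leftovers at the mountpoint
drive.mount('/content/drive', force_remount=True)   # asks to authorize again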
You can terminate the session via Runtime -> Manage sessions. That should do the trick, and you can then remount the drive.
Restarting runtimes and removing access did not help. I discovered that the notebook I was using created directories on the mountpoint:
from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
I first had to remove the subdirectories on the mountpoint. Before doing so, ensure that your drive is not actually mounted!
!find /content/drive
/content/drive
/content/drive/My Drive
/content/drive/My Drive/Colab Notebooks
/content/drive/My Drive/Colab Notebooks/assignment4
/content/drive/My Drive/Colab Notebooks/assignment4/output_dir
/content/drive/My Drive/Colab Notebooks/assignment4/output_dir/2020-04-05_16:17:15
The files and directories above were accidentally created by the notebook before I had mounted the drive. Once you are sure (are you sure?) that your drive is not mounted, delete the subdirectories.
!rm -rf /content/drive
After this, I was able to mount the drive.
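As a rough sketch of that check-then-clean sequence (assuming the standard /content/drive mountpoint; os.path.ismount is just one way to confirm that nothing is mounted there):

import os
from google.colab import drive

# Only delete the directory if Drive is NOT actually mounted there;
# otherwise you would be deleting real Drive content.
if os.path.ismount('/content/drive'):
    print('Drive is mounted; unmount it first with drive.flush_and_unmount().')
else:
    !rm -rf /content/drive
    drive.mount('/content/drive')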
Here is an explanation from their FAQs.
Why do Drive operations sometimes fail due to quota?
Google Drive enforces various limits, including per-user and per-file operation count and bandwidth quotas. Exceeding these limits will trigger Input/output error as above, and show a notification in the Colab UI. A typical cause is accessing a popular shared file, or accessing too many distinct files too quickly. Workarounds include:
Copy the file using drive.google.com and don't share it widely so that other users don't use up its limits.
Avoid making many small I/O reads, instead opting to copy data from Drive to the Colab VM in an archive format (e.g. .zip or .tar.gz files) and unarchive the data locally on the VM instead of in the mounted Drive directory.
Wait a day for quota limits to reset.
https://research.google.com/colaboratory/faq.html#drive-quota
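For the archive workaround above, a minimal sketch looks like this (the Drive path and archive name are placeholders for your own data):

from google.colab import drive
drive.mount('/content/drive')

# Copy the archive once from Drive to the VM's local disk, then unpack it
# locally instead of issuing many small reads against the mounted Drive.
!cp "/content/drive/My Drive/datasets/data.zip" /content/
!unzip -q /content/data.zip -d /content/data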
Symptom
/content/drive gets auto-mounted without my mounting it, and the Enter your authorization code: prompt never appears.
Cached old state of the drive kept showing up.
The actual Google drive content did not show up.
Terminating, restarting, factory resetting, revoking permissions, and clearing the Chrome cache did not work.
Flushing and unmounting with google.colab.drive.flush_and_unmount() did not work.
Solution
Create a dummy file inside the mount point /content/drive.
Take a moment and make sure the content of /content/drive is not the same as what is shown in the Google Drive UI.
Run rm -rf /content/drive.
Run google.colab.drive.flush_and_unmount()
From the menu Runtime -> Factory reset runtime.
Then re-run google.colab.drive.mount('/content/drive', force_remount=True), which finally asked for Enter your authorization code:.
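The code portion of those steps looks roughly like this (a sketch only; the factory reset itself is a manual action in the Runtime menu, and the paths assume the standard /content/drive mountpoint):

from google.colab import drive

# Steps 1-3: create a dummy file, confirm the contents differ from the Drive UI,
# then wipe the stale mountpoint.
!touch /content/drive/dummy.txt   # this dummy file should NOT appear in the Drive web UI
!rm -rf /content/drive

# Step 4: flush and unmount.
drive.flush_and_unmount()

# Step 5 is Runtime -> Factory reset runtime (a manual menu action). Then, step 6:
drive.mount('/content/drive', force_remount=True)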
The current code for the drive.mount() function is found at https://github.com/googlecolab/colabtools/blob/fe964e0e046c12394bae732eaaeda478bc5fa350/google/colab/drive.py
It is a wrapper for the drive executable found at /opt/google/drive/drive. I have found that this executable accepts an authorize_new_user flag, which can be used to force a reauthentication.
Copy and paste the contents of the drive.py file into your notebook. Then modify the call to d.sendline() currently on line 189 to look like this (note the addition of the authorize_new_user flag):
d.sendline(
    ('cat {fifo} | head -1 | ( {d}/drive '
     '--features=max_parallel_push_task_instances:10,'
     'max_operation_batch_size:15,opendir_timeout_ms:{timeout_ms},'
     'virtual_folders:true '
     '--authorize_new_user=True '
     '--inet_family=' + inet_family + ' ' + metadata_auth_arg +
     '--preferences=trusted_root_certs_file_path:'
     '{d}/roots.pem,mount_point_path:{mnt} --console_auth 2>&1 '
     '| grep --line-buffered -E "{oauth_prompt}|{problem_and_stopped}"; '
     'echo "{drive_exited}"; ) &').format(
         d=drive_dir,
         timeout_ms=timeout_ms,
         mnt=mountpoint,
         fifo=fifo,
         oauth_prompt=oauth_prompt,
         problem_and_stopped=problem_and_stopped,
         drive_exited=drive_exited))
Call either the drive module's version of flush_and_unmount() or the one you pasted in, and then call your version of mount() to log in as a different user!
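For example, assuming the pasted-in code defines mount() and flush_and_unmount() at the top level of your notebook, the switch could look like:

# These call the functions defined by the pasted-in drive.py code
# (with --authorize_new_user=True), not the stock google.colab.drive module.
flush_and_unmount()
mount('/content/drive', force_remount=True)   # re-prompts for authorization, so another account can log in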
Related
Docker Desktop supports moving the VM Image that it uses onto another drive if needed. On my Mac Mini (2018) I have moved it to an external SSD in order to enlarge it more than the internal storage would have allowed.
The external SSD was named "Extra Space", which (ironically) became a problem when I also tried to use the SSD for other non-Docker development and discovered that some of the Ruby Gems I am using have problems with spaces in path names.
I decided that I would have to rename the drive to "ExtraSpace" (without the "extra" space), but then Docker was not able to find its VM image. I was unable to use the built-in location chooser ("Preferences" -> "Resources" -> "Advanced" -> "Disk Image Location"), because that tool assumes it is moving the image from one location to another; in my case the image was not being moved, only the path to the existing image was changing.
I looked through the Docker configuration in ~/Library/Containers/com.docker.desktop/ and found the path to the image in Library/Containers/com.docker.docker/Data/vms/0/hyperkit.json. I tried changing it there, but Docker Desktop would not start.
In the error logs, I found lots of messages like this:
time="2021-10-31T15:06:43-04:00" level=error msg="(5487d9bc) 4ecbf40e-BackendAPI S->C f68f0c84-DriverCMD GET /vm/disk (925.402µs): mkdir /Volumes/Extra Space: permission denied"
common/cmd/com.docker.backend/internal/handlers.(*VMInitHandler).VMDiskInfo(0x58c13b8, {0x58b94a0, 0xc0001d82a0})
common/cmd/com.docker.backend/internal/handlers/vminit.go:39 +0x42
Why does Docker Desktop not follow the VM configuration file to find the location of the image? Is there somewhere else I have to change it?
After a lot more searching, I found the following additional files that need to be updated with the new path:
~/Library/Preferences/com.electron.docker-frontend.plist
~/Library/Preferences/com.electron.dockerdesktop.plist
~/Library/Group Containers/group.com.docker/settings.json
Once I had updated all of these files with the new path, Docker Desktop was able to start up correctly.
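As an illustration only, here is a small Python sketch for patching the path in settings.json (quit Docker Desktop first; the 'dataFolder' key and the new path are assumptions, so inspect the file to see which key actually holds the image location, and note that the two .plist files are property lists that need plutil or defaults rather than a JSON edit):

import json
from pathlib import Path

settings = Path.home() / 'Library/Group Containers/group.com.docker/settings.json'
data = json.loads(settings.read_text())

# 'dataFolder' is an assumed key name; check the file first, as the key that
# stores the disk-image location may differ between Docker Desktop versions.
data['dataFolder'] = '/Volumes/ExtraSpace/Docker'

settings.write_text(json.dumps(data, indent=2))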
I am trying to download the zip file dataset from drive using
from google.colab import drive
drive.mount('/content/drive')
!cp "/content/drive/Shared Drives/infinity-drive/datasets/coco.zip" ./data
I have been getting an error that one of the quota limits is exceeded for 3 consecutive days, and the file copy ends in an I/O error.
I went over all the suggested solutions. The file size is 18 GB. However, I could not understand the directive
"Use drive.google.com to download the file." What does that have to do with triggering limits in Colab? Is it meant for people trying to download a file to the Colab instance and then to their local machine? Anyway:
The file is in archive format.
The folder it is in contains no other files.
The file is private, although I am not the manager of the shared drive.
I am baffled as to exactly which quota I am hitting. The error does not tell me, but I am 99% sure I should not be hitting any, considering I can download the entire file to my local machine. I can't keep track of exactly what is happening, because even within the same day it is able to copy about 3-5 GB of the file (and even the amount copied changes).
This is a bug in colab/Drive integration, and is tracked in #1607. See this comment for workarounds.
DownloadError: ERROR: unable to download video data: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop.
The last 30x error message was:
Found
Since yesterday I can't use shared links to my Google Drive files with VideoColorizerColab.ipynb. I get the error above every time I try to colorize my videos.
Does anyone know what's going on? Thank you, Géza.
You might want to try mounting your Google Drive in your Colab notebook and copying the video into the Colab instance rather than using the link to download the video.
The code to mount your Google Drive in Colab is:
from google.colab import drive
drive.mount('/content/drive')
After this step, you can use all the content of your Drive as folders in your Colab notebook. You can see them in the Files section on the left side of your notebook. You can select a file, right-click, choose Copy path, and use that path to do any operation on the file.
This is an example of copying a folder from the mounted drive:
!cp -r /content/drive/My\ Drive/headTrainingDatastructure/eval /content/models/research/object_detection/
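Before copying, it can help to confirm that the mounted path actually exists (a small sketch using the folder from the example above):

import os

src = '/content/drive/My Drive/headTrainingDatastructure/eval'
print(os.path.exists(src))   # should print True once the mount has succeeded

if os.path.exists(src):
    # IPython expands {src} inside the shell command
    !cp -r "{src}" /content/models/research/object_detection/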
Recently I changed the permissions of the file system and gave myself all the rights. I logged out of the system and I couldn't log back in. I got the error message
Could not update ICEauthority file /home/marundu/.ICEauthority
I did a live boot with a Fedora 17 disc and replaced my .ICEauthority file with the live-user version, and it worked for a time, until I logged out again. Now the login progress screen is all that shows. I can log into command mode (Ctrl-Alt-F2), but I can't sudo - I get the error messages:
sudo: /usr/libexec/sudoers.so must be only writable by owner and sudo: fatal error. Unable to load plugins.
I just found a good link on Ubuntu:
Ask Ubuntu: ICEauthority permissions problem
Some things to note:
I tried the obvious things like changing file permissions, but found my whole home directory was somehow owned by root. I believe this was due to a failed package update.
I used a recovery disk (a Knoppix ISO) for ease of use: better UI.
When mounting the bad home partition, I used the most common Linux file system type (ext4).
I used 'sudo mount -o rw -t ext4 /dev/sda1 /mnt'
When changing ownership, I used the numeric user:group specification, since the recovery disk doesn't have the symbolic users and groups: 'sudo chown -R 1000:1000 /mnt/home/userdir'
I verified that /home/userdir had rwx for owner, r-x for group / other. This is noted as a valid set of permissions for ICEauthority; others can work. See the linked discussion.
Hope that helps someone...
I got the “Could not update ICEauthority file” error and found that my home partition was in "Read-Only" mode. Thus, this error made sense.
The real question was what caused the "Read-Only" attribute on the partition. I ran "dmesg | grep read-only" and found that there were serious errors with the file system on my home partition, which the kernel had set to "read-only" during the boot process.
I then booted from a USB key (CDROM would do as well) and ran "sudo fsck /dev/sdXY" where /dev/sdXY is the partition containing my home directory. fsck corrected a number of file system errors on my home partition.
I then rebooted after removing the USB key/CDROM, and the problem went away.
Bottom line: Check if your home partition has file system errors. They might be the cause of this error. If so, run fsck from an external device on the partition containing your home directory.
I have a Dlink NAS (dns-323) in RAID1 that I use to backup family photos, videos and some other data. I also manually rsync to a dedicated backup drive on a little Atom Linux box whenever we add a lot of new files to the NAS. I finally lost a drive on the NAS and through a misstep of my own, also lost the entire volume. No problem, that's what the backup drive is for. I used the same rsync command in reverse to restore files to the NAS after I replaced the bad drive and created a new RAID volume. This worked well, except that after the command finished, I noticed that it did not preserve timestamps. Timestamps were preserved in the NAS->backup direction, but not the backup->NAS direction.
I run the rsync command on the Atom Linux box with these options (this does preserve timestamps):
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dns-323 /mnt/dlink_backup --progress --verbose --itemize-changes
The reverse command to restore the volume from the backup (which did not preserve timestamps) is very similar:
rsync --archive --human-readable --inplace --numeric-ids --delete /mnt/dlink_backup/dns-323/ /mnt/dns-323/ --progress --verbose --itemize-changes
which actually restores the files, but gives many errors like:
rsync: failed to set times on "/mnt/dns-323/Rich/Code/.emacs": No such file or directory (2)
I've been googling most of the afternoon and trying different things, but so far haven't solved my problem. I used the 'touch' command to successfully modify the times of one or two files on the NAS, just to prove that it can be done since I believe that is one thing that rsync must do. I've tried doing this as my user and as root. By this I mean that I've run sudo rsync ..... as well as rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' ..... where ..... is all of the previously mentioned parameters. My /etc/fstab has these entries for the NAS and the backup drive, respectively:
# the dns-323
//192.168.1.202/Volume_1 /mnt/dns-323 cifs guest,rw,uid=1000,gid=1000,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
# the dlink_backup drive
/dev/sdb /mnt/dlink_backup ext3 defaults 0 0
It's not absolutely critical to preserve timestamps if it just plain can't be done, but it seems like it should be possible - I'm just stumped.
Thanks in advance. Let me know if I can provide any additional information.
I've earned my "tumbleweed" badge as a result of this one. pats self on back
What I've learned:
My solution:
1) Removed the left hard drive from the dns-323, which is half of the RAID1 volume.
2) Mounted (ext3) this drive using a USB-to-SATA adapter to the machine where I run rsync.
3) Performed the rsync command for the restore outlined above. I removed the --delete option, which really shouldn't have been there, and I added the --size-only option. With --size-only, the timestamps were essentially the only thing that got updated, since the files themselves had already been restored properly.
4) Unmounted the left drive from the Atom machine and returned that drive to the dns-323, while also removing the right drive. The right drive needed to be removed so that the dns-323 would recognize that the RAID volume was degraded.
5) Re-added the right drive to the dns-323 and told it to rebuild the RAID volume.
6) All timestamps are now good.
A possible alternate solution:
I've read enough about rsync and NFS/Samba/cifs now to understand that this problem is likely related to permissions on the NFS server (dns-323). Internally, the user/group ids in the dns-323 are 501/501. No permutation of how I mounted the dns-323 on the Atom box would allow rsync to properly set timestamps. I do believe that changing my user account on the Atom box to have uid/gid of 501/501 would have worked, though. My user had the default 1000/1000 and root had 0/0 IIRC.