File upload failed when I was uploading a file to a Google Compute Engine instance; now it's showing xyz.zip.csupload. Can I resume or retry it? - file-upload

The instance operating system is Ubuntu 16.04.
I was uploading using the instance's "upload file" option.
The file size was 2.24 GB.
I didn't find anything useful on the internet.
Thanks

The file "xyz.zip.ccsupload" is the file with the partial upload. Once the upload is complete, then the file will have the proper name. You cannot resume the upload from where it left off. If it fails, then you will have to attempt uploading the file again.
The failure is most likely due to the file size. Given how large the file is, I would suggest using the "gcloud compute scp" command to upload it to the VM instance, as documented here.
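For example, from a machine with the Cloud SDK installed, something along these lines should work (the instance name, zone, and paths here are placeholders, not details from the question):
gcloud compute scp ./xyz.zip my-instance:~/xyz.zip --zone=us-central1-a
Copying over SSH like this is generally more reliable for multi-gigabyte files than the browser upload, and you can simply rerun the command if it gets interrupted.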

Related

Trying to download zip file from drive to instance in colab hits quota limit

I am trying to download a zipped dataset from Drive using
from google.colab import drive
drive.mount('/content/drive')
!cp "/content/drive/Shared Drives/infinity-drive/datasets/coco.zip" ./data
I get an error that one of the quota limits has been exceeded, for three days in a row now, and the file copy ends in an I/O error.
I went over all the suggested solutions. The file size is 18 GB. I could not understand the directive "Use drive.google.com to download the file." What does that have to do with triggering limits in Colab? Is it meant for people trying to download a file to the Colab instance and then to their local machine? Anyway:
The file is in archive format.
The folder it is in has no other files in it.
The file is private, although I am not the manager of the shared drive.
I am baffled as to exactly which quota I am hitting. The error does not say, but I am 99% sure I should not be hitting any, considering I can download the entire thing to my local machine. I can't keep track of exactly what is happening, because even within the same day it manages to copy about 3-5 GB of the file (and even the copied size changes).
This is a bug in the Colab/Drive integration, and is tracked in #1607. See this comment for workarounds.

uploading large file to GraphDB

I want to upload the whole DBpedia dataset to GraphDB. I installed it using a Docker container and have it running successfully. Now the problem is that I am unable to upload the .nt files, because the maximum allowed file size is 200 MB. Can anybody help?
You can get around the limit by using Import -> Server files. You can change the server files directory by passing -Dgraphdb.workbench.importDirectory=path_to_directory when starting GraphDB.
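As a rough sketch of how that can look with Docker (the image name and paths below are placeholders, and the GDB_JAVA_OPTS variable is an assumption about the official image rather than something stated in the answer): mount a host directory into the container and point the workbench import directory at it, and the .nt files you place there will show up under Import -> Server files.
docker run -d -p 7200:7200 \
  -v /data/dbpedia:/graphdb-import \
  -e GDB_JAVA_OPTS="-Dgraphdb.workbench.importDirectory=/graphdb-import" \
  ontotext/graphdb
Without Docker, the same property can be passed straight to the startup script, e.g. graphdb -Dgraphdb.workbench.importDirectory=/data/dbpedia.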

PyCharm failed when trying to read a big .csv file while spyder succeeded

I have both PyCharm and Spyder installed on our remote desktop. I personally prefer PyCharm and have been developing with it.
Everything was fine until I found that I cannot read a .csv file bigger than 1 GB in PyCharm; it tells me Python crashed, and the log shows "Process finished with exit code -1073740940 (0xC0000374)".
At first I thought the .csv file might be corrupted, but in Spyder pandas read it successfully, so the .csv file is fine.
I do not know why; I did not change my PyCharm configuration, though maybe someone else did, because several other colleagues also have access to this remote desktop.
I am fairly sure the file is OK, and I had previously been reading .csv files even bigger than 20 GB in PyCharm using pandas, so does anybody have any idea about this?
Try changing the
idea.max.content.load.filesize=20000
line in idea.properties. It controls the
"Maximum file size (kilobytes) IDE is able to open."
You can get to your properties file from Help > Edit Custom Properties, then set, for example:
idea.max.intellisense.filesize=30000
idea.max.content.load.filesize=30000
which should let the IDE open a 30 MB file.
You may need to increase the total memory allotment in PyCharm after you do this. I know that for me, PyCharm slowed to a crawl while trying to read in a file this big.
More details on these settings: https://www.jetbrains.com/help/pycharm/file-idea-properties.html
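On the memory side, a minimal illustration (the 4 GB value is arbitrary): in recent PyCharm versions you can open Help > Edit Custom VM Options and raise the heap limit with a line like
-Xmx4096m
then restart the IDE for it to take effect.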

Failed to download .ckpt weights from Google Colab

I've trained a Tensorflow model on Google Colab, and saved that model in ".ckpt" format.
I want to download the model so I tried to do this:
from google.colab import files
files.download('/content/model.ckpt.index')
files.download('/content/model.ckpt.meta')
files.download('/content/model.ckpt.data-00000-of-00001')
I was able to get the meta and index files. However, the data file gives me the following error:
"MessageError: Error: Failed to download: Service Worker Response Error"
Could anybody tell me how I should solve this problem?
Google Colab doesn't allow downloading files of large sizes (I'm not sure about the exact limit). Possible solutions are to either split the file into smaller files, or to use GitHub to push your files and then download them to your local machine.
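If you go the splitting route, a minimal sketch (the chunk size and paths are arbitrary, and ckpt_part_ is just a made-up prefix): split the data file inside the Colab VM, download each piece with files.download as above, and reassemble it locally.
In a Colab cell:
!split -b 200M /content/model.ckpt.data-00000-of-00001 /content/ckpt_part_
Then, after downloading every /content/ckpt_part_* piece, on your local machine:
cat ckpt_part_* > model.ckpt.data-00000-of-00001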
I just tried with a 17 MB graph file using the same command syntax with no error. Perhaps a transient problem on Google's servers?
For me it helped to rename the file before download.
I had a file named
26.9766_0.5779_150-Adam-mean_absolute_error#3#C-8-1-....-RL#training-set-6x6.04.hdf5
and renamed it to
model.hdf5
before download, then it worked. Maybe the '-' in the filename caused the error in my case.
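For reference, a rename like that can be done straight from a Colab cell before calling files.download; the long filename is shortened to a placeholder here:
!mv /content/training-run.hdf5 /content/model.hdf5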

Unable to open a saved Gephi project file

Recently I worked on a project done in the network visualization and analysis software Gephi, and I saved it with the ".gephi" extension. However, when I try to reopen the file, it gives the following error message:
"The project file couldn't be opened. Please check the file has .gephi extension.
XMLStreamException - ParseError at [row,col]:[1,1]
Message: Premature end of file."
I'm a beginner in Gephi and only an amateur programmer. I do not understand this error message and thus have no ideas on how to resolve it. I tried updating Gephi to the latest version. I also tried to open the file from within Gephi. Neither of those steps has resolved the problem. Can anyone help me out with this, please?
The error message "premature end of file" means that the XML file was not complete. I suppose either the whole file is empty or just the XML part of it, so maybe the file got corrupted while saving.
Can you try to open the file with Notepad or a hex editor to verify that it has some content?
There must be some bug in the Gephi file writing or reading process.
In order to identify the problem, it would help if you could post the Gephi log file from when the error happens.
You can find the log file in the Gephi user directory (check http://wiki.gephi.org/index.php/Troubleshooting).
For example, on Windows 7 the path is C:\Users\Your_User\AppData\Roaming\.gephi\dev\var\log\messages.log
Also, if you can share the files, it will be easier to fix.
This could be related to an open bug where Java 6 is used to save the Gephi file and then Java 7 is used to load it, say on a different machine.
The JDK used by Gephi can be specified in etc/gephi.conf (inside the Gephi installation directory), or alternatively it can be passed as the --jdkhome parameter when launching Gephi.
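As a sketch of what that looks like (the JDK path is just an example for a Linux system, not taken from the answer), the relevant line in gephi.conf and the equivalent launch parameter are roughly:
jdkhome="/usr/lib/jvm/java-6-openjdk-amd64"
./bin/gephi --jdkhome /usr/lib/jvm/java-6-openjdk-amd64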
The problem is with java and javac:
If you created your Gephi file with java-6-openjdk (for example) and then switch your Java to java-7-openjdk, this problem arises.
I fixed my Gephi by switching back to the same java and javac executables on Linux:
(In terminal)
sudo update-alternatives --config java
and then
(In terminal)
sudo update-alternatives --config javac
Hope this can help!