Google Colab unzip error (start of central directory not found ...) - google-colaboratory

Archive: /......zip
warning [/......zip]: 267811291 extra bytes at beginning or within zipfile
(attempting to process anyway)
error [......zip]: start of central directory not found;
zipfile corrupt.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
Is there any solution for this error?
I saw the same question asked about 8 months ago with no answers, so I am asking again.

Related

Trying to download zip file from drive to instance in colab hits quota limit

I am trying to download the zip file dataset from Drive using:
from google.colab import drive
drive.mount('/content/drive')
!cp "/content/drive/Shared Drives/infinity-drive/datasets/coco.zip" ./data
I get an error that one of the quota limits has been exceeded, for 3 days in a row now, and the file copy ends in an I/O error.
I went over all the suggested solutions. The file size is 18 GB. I could not understand the directive "Use drive.google.com to download the file." What does that have to do with triggering limits in Colab? Is it meant for people trying to download the file to the Colab instance and then to their local machine? Anyway:
The file is in archive format.
The folder it is in contains no other files.
The file is private, although I am not the manager of the shared drive.
I am baffled as to exactly which quota I am hitting. The error does not say, but I am 99% sure I should not be hitting any, considering I can download the entire file to my local machine. I can't keep track of exactly what is happening, because even within the same day it manages to copy about 3-5 GB of the file (and even the copied size changes).
This is a bug in the Colab/Drive integration and is tracked in #1607. See this comment for workarounds.
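Those workarounds are not reproduced here, but a partial copy is also what produces the "start of central directory not found" failure in the question above, so it is worth verifying that the copy actually completed before unzipping. Below is a minimal sketch, not an official fix: it copies the archive with Python's standard library and compares source and destination sizes. The paths are the ones from the question, used purely as placeholders.

import os
import shutil
from google.colab import drive

drive.mount('/content/drive')

# Placeholder paths taken from the question; adjust to your own Drive layout.
src = '/content/drive/Shared Drives/infinity-drive/datasets/coco.zip'
dst = '/content/data/coco.zip'

os.makedirs(os.path.dirname(dst), exist_ok=True)
shutil.copyfile(src, dst)  # may still hit the Drive quota; see issue #1607

src_size = os.path.getsize(src)
dst_size = os.path.getsize(dst)
print(f'source: {src_size} bytes, copy: {dst_size} bytes')
if src_size != dst_size:
    raise IOError('copy is truncated - do not unzip it, retry later')

If the sizes differ, unzip would typically report the kind of "extra bytes" / "central directory not found" warnings shown at the top of this page.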

'gunzip' is not recognized as an internal or external command, operable program or batch file. System command 'gunzip' failed

I am trying to analyse my raw GNSS data on the GNSS Analyser app from here https://github.com/google/gps-measurement-tools. The installation guide includes the following step:
4.2 gunzip installation
The automatic ftp code inside GnssAnalysis will download ephemeris zip files, and attempt to
unzip them using gunzip.
Download gzip.exe from here http://ftp.gnu.org/gnu/gzip/gzip-1.9.zip
Extract the files from the zip file, rename gzip.exe to gunzip.exe
Move gunzip.exe to somewhere in your Windows path (type path in the Windows
Command Prompt to see what your path is, typically you will find a directory
C:\Windows\system32 and you can put gunzip.exe there.)
However, after downloading the archive I can't find a gzip.exe file, so I tried renaming the gzip.c and gzip.h files instead. That did not work, and I got this error when attempting to process my own raw data.
I have just tried this and successfully imported a DB from a backup file:
gzip -d < C:\Users\my-user\Downloads\my-db-backup.sql.gz | mysql -u root -p MY_DB_NAME
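As a side note on step 4.2 above: the PATH setup can be sanity-checked without running the analysis tool at all. The short Python sketch below is only an illustration (Python is not part of the GnssAnalysis toolchain); it checks whether gunzip is resolvable on the PATH and shows the equivalent of what gunzip does to a downloaded ephemeris file. The .gz filename is a made-up placeholder.

import gzip
import shutil

print('gunzip found at:', shutil.which('gunzip'))  # None means the PATH step failed

# Fallback illustration: decompress a .gz file the way gunzip would.
with gzip.open('ephemeris_example.gz', 'rb') as fin, open('ephemeris_example', 'wb') as fout:
    shutil.copyfileobj(fin, fout)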

Aerospike docker - 100L, 'UDF: Execution Error 1'

I deployed an Aerospike container using the official docker hub image. When I try to execute test_list = client.llist(key, 'test_list'), my Python client script returns the following error:
exception.UDFError: (100L, 'UDF: Execution Error 1', 'src/main/llist/llist_operations.c', 93)
I looked at the Aerospike logs and found that each time this code is executed, the error below gets printed:
: WARNING (udf): (src/main/mod_lua.c:599) Lua Create Error: module 'llist' not found:
no field package.preload['llist']
no file './llist.lua'
no file '/usr/local/share/luajit-2.0.3/llist.lua'
no file '/usr/local/share/lua/5.1/llist.lua'
no file '/usr/local/share/lua/5.1/llist/init.lua'
no file '/opt/aerospike/sys/udf/lua/llist.lua'
no file '/opt/aerospike/sys/udf/lua/external/llist.lua'
no file '/opt/aerospike/usr/udf/lua/llist.lua'
no file './llist.so'
no file '/usr/local/lib/lua/5.1/llist.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/opt/aerospike/sys/udf/lua/llist.so'
no file '/opt/aerospike/sys/udf/lua/external/llist.so'
no file '/opt/aerospike/usr/udf/lua/llist.so'
: INFO (udf): (udf.c:954) lua error, ret:1
I could not find the relevant Lua files or a Lua installation in the container. My code works fine when I run it directly on the host. Is there some extra configuration that needs to be done to the container?
LDTs were dropped in 3.15.
https://www.aerospike.com/docs/guide/ldt_guide.html
Excerpt:
Aerospike has removed the Large Data Type feature as of server version 3.15 after deprecating this functionality 12 months earlier. Please see the removal notice and deprecation notice. The features listed below are no longer in Aerospike servers.
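Since llist and the other LDTs are gone, the replacement on current servers is an ordinary list bin manipulated with list operations. The sketch below is only an outline of that approach, not code from the question; the host, namespace, set, and key are assumptions.

import aerospike
from aerospike_helpers.operations import list_operations

# Assumed connection details for a local Aerospike container.
client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()
key = ('test', 'demo', 'my_key')

# Append to a plain list bin; the bin is created if it does not exist yet.
client.operate(key, [list_operations.list_append('test_list', 42)])

# Read the whole bin back.
_, _, bins = client.get(key)
print(bins['test_list'])
client.close()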

Fully uninstalling the Google Cloud SDK

I tried to uninstall the Google Cloud SDK from my computer a long time ago, but whenever I log in to a terminal I get this message:
Last login: Sat Sep 9 12:40:05 on console
You have new mail.
-bash: /Users/me/google-cloud-sdk/path.bash.inc: No such file or directory
-bash: /Users/me/google-cloud-sdk/completion.bash.inc: No such file or directory
ME:~ myshell$
I have tried this answer, but the problem is that I no longer have a Google Cloud account.
I also reached a dead end on this one as well, because none of the files or folders seem to exist on my computer.
How do I resolve this?
You will have to edit your .bashrc and remove the references to google-cloud-sdk. Look for lines like:
source "$CLOUD_SDK/completion.bash.inc"
source "$CLOUD_SDK/path.bash.inc"
Also, you may want to remove ~/.config/gcloud
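If you are not sure which startup file still points at the SDK, a quick scan helps. This is just a convenience sketch, not part of the original answer; the list of files is an assumption about a typical macOS bash setup.

import pathlib

# Print any leftover google-cloud-sdk lines so they can be deleted by hand.
for name in ('.bashrc', '.bash_profile', '.profile'):
    path = pathlib.Path.home() / name
    if not path.exists():
        continue
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if 'google-cloud-sdk' in line:
            print(f'{path}:{lineno}: {line}')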

Unable to open a saved Gephi project file

Recently I worked on a project in the network visualization and analysis software Gephi and saved it with the ".gephi" extension. However, when I try to reopen the file, it gives the following error message:
"The project file couldn't be opened. Please check the file has .gephi extension.
XMLStreamException - ParseError at [row,col]:[1,1]
Message: Premature end of file."
I'm a beginner in Gephi and only an amateur programmer. I do not understand this error message and thus have no idea how to resolve it. I tried updating Gephi to the latest version, and I also tried opening the file from within Gephi. Neither of those steps has resolved the problem. Can anyone help me out with this, please?
The error message "premature end of file" means that the XML file was not complete. I suppose that either the whole file is empty or just its XML part, so maybe the file got corrupted while saving.
Can you try to open the file with Notepad or a hex editor to verify that it has some content?
There must be some bug in the Gephi file writing or reading process.
In order to identify the problem, it would help if you could post the Gephi log file from when the error happens.
You can find the log file in the Gephi user directory (check http://wiki.gephi.org/index.php/Troubleshooting).
For example, in Windows 7 the path is C:\Users\Your_User\AppData\Roaming\.gephi\dev\var\log\messages.log
Also, if you can share the files, it will be easier to fix.
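For the "verify that it has some content" step, a few lines of Python will do the same job as a hex editor. This is only a convenience sketch; project.gephi is a placeholder path.

import os

path = 'project.gephi'
size = os.path.getsize(path)
with open(path, 'rb') as f:
    head = f.read(16)
print(f'{path}: {size} bytes, starts with {head!r}')
# A size of 0, or a header of all zero bytes, would match the "premature end of file" error.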
This could be related to an open bug where Java 6 is used to save the .gephi file and then Java 7 is used to load it, say on a different machine.
The JDK used by Gephi can be specified in /etc/gephi.conf, or alternatively it can be passed as the --jdkhome parameter when launching Gephi.
The problem is with java and javac:
If you created your .gephi file with java-6-openjdk (for example) and then switch your Java to java-7-openjdk, this problem appears.
I fixed my Gephi by switching back to the same java and javac executables on Linux:
(In terminal)
sudo update-alternatives --config java
and then
(In terminal)
sudo update-alternatives --config javac
Hope this helps!