Graphlab Create setup error: graphlab.get_dependencies() results in BadZipFile error - graphlab

After installing GraphLab Create on Windows 10, it asks us to install two dependencies using graphlab.get_dependencies().
However, I am getting the following error:
In [9]: gl.get_dependencies()
By running this function, you agree to the following licenses.
* libstdc++: https://gcc.gnu.org/onlinedocs/libstdc++/manual/license.html
* xz: http://git.tukaani.org/?p=xz.git;a=blob;f=COPYING
Downloading xz.
Extracting xz.
---------------------------------------------------------------------------
BadZipfile Traceback (most recent call last)
in ()
----> 1 gl.get_dependencies()
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\site-packages\graphlab\dependencies.pyc in get_dependencies()
34 xzarchive_dir = tempfile.mkdtemp()
35 print('Extracting xz.')
---> 36 xzarchive = zipfile.ZipFile(xzarchive_file)
37 xzarchive.extractall(xzarchive_dir)
38 xz = os.path.join(xzarchive_dir, 'bin_x86-64', 'xz.exe')
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\zipfile.pyc in __init__(self, file, mode, compression, allowZip64)
768 try:
769 if key == 'r':
--> 770 self._RealGetContents()
771 elif key == 'w':
772 # set the modified flag so central directory gets written
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\zipfile.pyc in _RealGetContents(self)
809 raise BadZipfile("File is not a zip file")
810 if not endrec:
--> 811 raise BadZipfile, "File is not a zip file"
812 if self.debug > 1:
813 print endrec
BadZipfile: File is not a zip file
Does anyone know how to resolve this?

If you get this error, a firewall might be blocking the download of a dependency. Here is some information and a workaround:
Please see the SFrame source code for get_dependencies to see how GraphLab uses this package: https://github.com/turicode/SFrame/blob/master/oss_src/unity/python/sframe/dependencies.py
The xz utility is only used to extract runtime dependencies from the other file downloaded there (from repo.msys2.org): http://repo.msys2.org/mingw/x86_64/mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz. Two DLLs from that file need to be extracted into the "cython" directory inside the GraphLab Create install path (typically something like lib/site-packages/python2.7/graphlab within a virtualenv or conda env). Once extracted, the dependency issue should be resolved.
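If the download itself is blocked, a possible manual workaround (a rough sketch, not the official procedure) is to fetch the .pkg.tar.xz archive yourself, for example in a browser, and extract the two gcc runtime DLLs into that cython directory. The snippet below assumes a Python 3 interpreter is available for the extraction step (Python 2.7's tarfile cannot read .xz archives, which is why get_dependencies downloads xz.exe); the target path and DLL names are assumptions taken from the traceback above and the SFrame source, so verify them against your install:
# Hypothetical manual extraction of the gcc runtime DLLs (run with Python 3).
import os
import tarfile

archive = 'mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz'  # downloaded by hand
# Assumed install path, based on the traceback above; adjust to your environment.
target = r'C:\Users\nikulk\Anaconda2\envs\gl-env\lib\site-packages\graphlab\cython'
wanted = ('libstdc++-6.dll', 'libgcc_s_seh-1.dll')  # assumed names, check the SFrame source

with tarfile.open(archive, 'r:xz') as tar:   # Python 3 tarfile reads .xz directly
    for member in tar.getmembers():
        name = os.path.basename(member.name)
        if name in wanted:
            with open(os.path.join(target, name), 'wb') as out:
                out.write(tar.extractfile(member).read())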

Make the graphlab folder writable; initially it is read-only. Go to the folder's Properties and untick the read-only option. Hope this solves your problem.

Related

Colab FileNotFoundError - Stable Diffusion Unfiltered

So I'm a complete noob at this. I googled the problem but found only a bunch of unrelated entries.
I'm trying to run this (stable-diffusion):
https://colab.research.google.com/drive/1uWCe41_BSRip4y4nlcB8ESQgKtr5BfrN#scrollTo=lTRtiZZk0h5d
And following a guide "for better RAM usage" replaced:
https://github.com/CompVis/stable-diffusion.git
with:
https://github.com/chemistzombie/stable-diffusion-unfiltered.git
And am now getting following error code:
Cloning into 'stable-diffusion-unfiltered'...
remote: Enumerating objects: 323, done.
remote: Total 323 (delta 0), reused 0 (delta 0), pack-reused 323
Receiving objects: 100% (323/323), 42.64 MiB | 37.09 MiB/s, done.
Resolving deltas: 100% (109/109), done.
FileNotFoundError Traceback (most recent call last)
in
2 get_ipython().system('git clone https://github.com/chemistzombie/stable-diffusion-unfiltered.git')
3 import os
----> 4 os.chdir('stable-diffusion')
FileNotFoundError: [Errno 2] No such file or directory: 'stable-diffusion'
I checked the link and the replacement file still exists.
I looked for typos, checked that the linked repository still exists, and googled for any available troubleshooting steps.
Anyone familiar with this?
I encountered the same problem. In the "Download the Repository" section, you'll need to add "-unfiltered" after stable-diffusion. For me, it then threw another error when trying to view the images; again, you need to add "-unfiltered" after stable-diffusion if you hit that problem.
Clone the repo
!git clone https://github.com/chemistzombie/stable-diffusion-unfiltered.git
import os
os.chdir('stable-diffusion-unfiltered')

saleae logic 2 package in notebook

It is my first time with Saleae.
I've installed it on my Windows machine and launched a notebook. My problem
is that I can't create a Saleae object. Here is my code:
import saleae
from saleae import Saleae
s = Saleae()
I’m having this error message:
INFO:saleae.saleae:Could not connect to Logic software, attempting to launch it now
Output exceeds the size limit. Open the full output data in a text editor
ConnectionRefusedError                    Traceback (most recent call last)
File ....\lib\site-packages\saleae\saleae.py:211, in Saleae.__init__(self, host, port, quiet, args)
    210     self._s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
--> 211     self._s.connect((host, port))
    212 except ConnectionRefusedError:
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
How can I solve this issue?
I found the solution by reverting to Logic 1.2.40.
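For reference, a minimal sketch of the usual connection, assuming the legacy Logic 1.x software (e.g. 1.2.40) is installed and its scripting socket server is enabled in the options; the python saleae package talks to that socket, which Logic 2 does not expose, hence the ConnectionRefusedError:
from saleae import Saleae

# 10429 is the default port of the Logic 1.x scripting socket server (assumed default).
s = Saleae(host='localhost', port=10429)
print(s.get_connected_devices())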

Google Drive in Colaboratory is not mounted. TimeOut

For several days now, I have not been able to connect to Google Drive in Google Colaboratory.
Here is the code, which always worked before:
from google.colab import drive
drive.mount('/content/gdrive/')
I tried the following, after reading several blogs for a solution:
-> I only have two items in the top-level folder of my Google Drive
-> My trash folder is empty
-> I reset all the sessions
-> I created a new file for testing with just the piece of code above
-> I restarted my computer and my Chrome browser
-> I tried "drive.mount('/content/gdrive')", drive.mount('/content/gdrive/'), drive.mount('/content/'), drive.mount('/content'), drive.mount('/content/gdrive/My Drive')
Any idea?
Thanks a lot!
here is the error:
TIMEOUT Traceback (most recent call last)
<ipython-input-2-9a9a89271754> in <module>()
1 from google.colab import drive
----> 2 drive.mount('/content/gdrive/')
4 frames
/usr/local/lib/python3.6/dist-packages/pexpect/expect.py in timeout(self, err)
142 exc = TIMEOUT(msg)
143 exc.__cause__ = None # in Python 3.x we can use "raise exc from None"
--> 144 raise exc
145
146 def errored(self):
TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0x7f5666fe0a90>
searcher: searcher_re:
0: re.compile('google.colab.drive MOUNTED')
1: re.compile('root#2cd8a6fe3c15-de18aaee18634b4c829aedf956090476: ')
2: re.compile('(Go to this URL in a browser: https://.*)$')
3: re.compile('Drive File Stream encountered a problem and has stopped')
4: re.compile('drive EXITED')
<pexpect.popen_spawn.PopenSpawn object at 0x7f5666fe0a90>
searcher: searcher_re:
0: re.compile('google.colab.drive MOUNTED')
1: re.compile('root#2cd8a6fe3c15-de18aaee18634b4c829aedf956090476: ')
2: re.compile('(Go to this URL in a browser: https://.*)$')
3: re.compile('Drive File Stream encountered a problem and has stopped')
4: re.compile('drive EXITED')
This is being tracked in https://github.com/googlecolab/colabtools/issues/1540.
Workaround is to copy the oauth code using a mouse-drag instead of using the "copy" button.
It looks like someone suggested manually copy-pasting the authentication code generated instead of pressing the copy button and then pasting. This worked for me :)
Hi, I copied the authentication code into Notepad and then pasted it from there into Colab. It worked.
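As a small sketch of that workaround, the standard mount call can be retried with force_remount after a failed attempt, pasting the authorization code manually when prompted:
from google.colab import drive

# Retry the mount; when prompted, select and copy the authorization code with
# the mouse instead of using the "copy" button, then paste it into the prompt.
drive.mount('/content/gdrive', force_remount=True)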

Google colab unable to work with hdf5 files

I have 4 HDF5 files in my Drive. While using Colab, db=h5py.File(path_to_file, "r") works sometimes and doesn't the rest of the time. While writing the HDF5 files, I made sure I closed them after writing. Say File1 works on notebook_#1; when I try to use it on notebook_#2 it works sometimes and fails other times. When I run it again on notebook_#1 it may or may not work.
Size is probably not the issue, because some of my files are 32 GB and others 4 GB, and the problem is mostly with the 4 GB files.
The hdf5 files were generated using colab itself. The error that I get is:
OSError: Unable to open file (file read failed: time = Tue May 19 12:58:36 2020
, filename = '/content/drive/My Drive/Hrushi/dl4cv/hdf5_files/train.hdf5', file descriptor = 61, errno = 5, error message = 'Input/output error', buf = 0x7ffc437c4c20, total read size = 8, bytes this sub-read = 8, bytes actually read = 18446744073709551615, offset = 0
or
/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
171 if swmr and swmr_support:
172 flags |= h5f.ACC_SWMR_READ
--> 173 fid = h5f.open(name, flags, fapl=fapl)
174 elif mode == 'r+':
175 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5f.pyx in h5py.h5f.open()
OSError: Unable to open file (bad object header version number)
Would be grateful for any help, thanks in advance.
Reading directly from Google Drive can cause problems.
Try copying the file to a local directory, e.g. /content/, first.
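A minimal sketch of that workaround, using the Drive path from the error message above:
import shutil
import h5py

drive_path = '/content/drive/My Drive/Hrushi/dl4cv/hdf5_files/train.hdf5'
local_path = '/content/train.hdf5'

shutil.copy(drive_path, local_path)   # copying multi-GB files can take a while
db = h5py.File(local_path, 'r')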

numpy.genfromtxt input fname argument from list

I have a number of text files as output from a calculation from which I wish to extract data:
(Note: Because some of the files are rather mangled, I have placed copies in my Dropbox. The URL is https://www.dropbox.com/sh/h774f8jzjb5l0wx/AAAqhvHsmPAhK_svdQG2Ou9Ha?dl=0)
=======================================================================
PSOVina version 2.0
Giotto H. K. Tai & Shirley W. I. Siu
Computational Biology and Bioinformatics Lab
University of Macau
Visit http://cbbio.cis.umac.mo for more information.
PSOVina was developed based on the framework of AutoDock Vina.
For more information about Vina, please visit http://vina.scripps.edu.
=======================================================================
Output will be 14-7_out.pdbqt
Reading input ... done.
Setting up the scoring function ... done.
Analyzing the binding site ... done.
Using random seed: 768314908
Performing search ... done.
Refining results ... done.
mode | affinity | dist from best mode
| (kcal/mol) | rmsd l.b.| rmsd u.b.
-----+------------+----------+----------
1 -9.960902669 0.000 0.000
2 -8.979504781 1.651 2.137
3 -8.942611364 3.051 6.898
4 -8.915523010 2.146 2.875
5 -8.736508831 2.908 7.449
6 -8.663387139 2.188 2.863
7 -8.410739711 5.118 7.281
8 -8.389146347 2.728 3.873
9 -8.296798909 2.416 3.846
10 -8.168454106 3.809 8.143
11 -8.127990818 3.712 8.109
12 -8.127103774 3.084 4.097
13 -7.979090739 3.798 4.959
14 -7.941872682 4.590 8.294
15 -7.900766215 3.300 8.204
16 -7.881485228 2.953 4.224
17 -7.837826485 3.005 4.125
18 -7.815909505 4.390 8.782
19 -7.722540286 5.695 9.851
20 -7.720346742 3.362 4.593
Writing output ... done.
This works:
import numpy as np
print('${d}')
data = np.genfromtxt("14-7.log", usecols=(1), skip_header=27,
skip_footer=1, encoding=None)
print(data)
np.savetxt('14-7.dG', data, fmt='%12.9f', header='14-7')
print(data)
which produces:
runfile('/home/comp/Apps/Python/PsoVina/DeltaGTable_V_s.py',
wdir='/home/comp/Apps/Python/PsoVina', current_namespace=True)
${d}
[-9.96090267 -8.97950478 -8.94261136 -8.91552301 -8.73650883 -8.66338714
-8.41073971 -8.38914635 -8.29679891 -8.16845411 -8.12799082 -8.12710377
-7.97909074 -7.94187268 -7.90076621 -7.88148523 -7.83782648 -7.8159095
-7.72254029 -7.72034674]
[-9.96090267 -8.97950478 -8.94261136 -8.91552301 -8.73650883 -8.66338714
-8.41073971 -8.38914635 -8.29679891 -8.16845411 -8.12799082 -8.12710377
-7.97909074 -7.94187268 -7.90076621 -7.88148523 -7.83782648 -7.8159095
-7.72254029 -7.72034674]
Note: the print statements are for a quick check of the output, which is:
# 14-7
-9.960902669
-8.979504781
-8.942611364
-8.915523010
-8.736508831
-8.663387139
-8.410739711
-8.389146347
-8.296798909
-8.168454106
-8.127990818
-8.127103774
-7.979090739
-7.941872682
-7.900766215
-7.881485228
-7.837826485
-7.815909505
-7.722540286
-7.720346742
Also, this bash script works:
#!/bin/bash
# Run.dG.list_1
while IFS= read -r d
do
echo "${d}.log"
done <ligand.list
which returns the three log file names:
14-7.log
15-7.log
18-7.log
But, if I run this bash script:
#!/bin/bash
# Run.dG.list_1
while IFS= read -r d
do
echo "${d}.log"
python3 DeltaGTable_V_sl.py
done <ligand.list
where DeltaGTable_V_sl.py is:
import numpy as np
print('${d}')
data = np.genfromtxt('${d}.log', usecols=(1), skip_header=27,
                     skip_footer=1, encoding=None)
print(data)
np.savetxt('${d}.dG', data, fmt='%12.9f', header='${d}')
print(data.dG)
I get:
(base) comp#AbNormal:~/Apps/Python/PsoVina$ sh ./Run.dG.list_1.sh
14-7.log
python3: can't open file 'DeltaGTable_V_sl.py': [Errno 2] No such file
or directory
15-7.log
python3: can't open file 'DeltaGTable_V_sl.py': [Errno 2] No such file
or directory
18-7.log
python3: can't open file 'DeltaGTable_V_sl.py': [Errno 2] No such file
or directory
C-VX3.log
python3: can't open file 'DeltaGTable_V_sl.py': [Errno 2] No such file
or directory
So, it would appear that the log file labels are in the workspace, but
'${d}.log' is not being recognized as fname by genfromtxt. Although I
have googled every combination of terms I can think of, I am obviously
missing something.
As I have potentially hundreds of files to process, I would appreciate
pointers towards a solution to the problem.
Thanks in advance.
Python does not know ${d}; that is a shell variable, expanded by your shell script, not by Python.
If you want to use a command-line argument passed to your Python script, you can use argparse or the sys module.
argparse is more powerful, so you could first try sys:
sys.argv[0] # name of the Python script
sys.argv[1] # command-line argument 1
sys.argv[n] # command-line argument n
See the Python documentation for sys.argv.
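A minimal sketch of DeltaGTable_V_sl.py rewritten to take the base name as a command-line argument (the '${d}' placeholder is a shell variable that Python never sees):
import sys
import numpy as np

name = sys.argv[1]   # e.g. "14-7", passed in by the shell loop
data = np.genfromtxt(name + '.log', usecols=(1,), skip_header=27,
                     skip_footer=1, encoding=None)
np.savetxt(name + '.dG', data, fmt='%12.9f', header=name)
The shell loop would then invoke it as python3 DeltaGTable_V_sl.py "$d" (using the script's full path if it is not in the working directory).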
I can create your error message with:
0029:~/mypy$ python3 foobar
python3: can't open file 'foobar': [Errno 2] No such file or directory
foobar is a random name, and there is clearly no such file in the current directory.
So you haven't even started DeltaGTable_V_sl.py, much less run into problems with genfromtxt; most of your question isn't relevant to the actual error.