Colab FileNotFoundError - Stable Diffusion Unfiltered - google-colaboratory

So I'm a complete noob at this. I googled the problem but only found a bunch of unrelated entries.
I'm trying to run this (stable-diffusion):
https://colab.research.google.com/drive/1uWCe41_BSRip4y4nlcB8ESQgKtr5BfrN#scrollTo=lTRtiZZk0h5d
Following a guide "for better RAM usage", I replaced:
https://github.com/CompVis/stable-diffusion.git
with:
https://github.com/chemistzombie/stable-diffusion-unfiltered.git
and am now getting the following error:
Cloning into 'stable-diffusion-unfiltered'...
remote: Enumerating objects: 323, done.
remote: Total 323 (delta 0), reused 0 (delta 0), pack-reused 323
Receiving objects: 100% (323/323), 42.64 MiB | 37.09 MiB/s, done.
Resolving deltas: 100% (109/109), done.
FileNotFoundError Traceback (most recent call last)
in
2 get_ipython().system('git clone https://github.com/chemistzombie/stable-diffusion-unfiltered.git')
3 import os
----> 4 os.chdir('stable-diffusion')
FileNotFoundError: [Errno 2] No such file or directory: 'stable-diffusion'
I checked the link and the replacement repository still exists. I looked for typos, double-checked that the linked repo exists, and googled for any existing troubleshooting.
Anyone familiar with this?

I encountered the same problem. In the "Download the Repository" section, you'll need to add "-unfiltered" after stable-diffusion. For me, it then threw another error when trying to view the images; again, you need to add "-unfiltered" after stable-diffusion if you hit that problem.
Clone the repo
!git clone https://github.com/chemistzombie/stable-diffusion-unfiltered.git
import os
os.chdir('stable-diffusion-unfiltered')
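If the image-viewing cell later fails with the same FileNotFoundError, the cause is identical: any hard-coded path starting with 'stable-diffusion/' has to become 'stable-diffusion-unfiltered/'. As a rough, hedged sketch (the outputs/txt2img-samples folder is an assumption about that notebook's layout, not something from the original post):
# Hedged sketch: display generated images from the renamed repo directory.
# The output folder below is an assumption about this notebook's layout.
import glob
from IPython.display import Image, display

for path in sorted(glob.glob('stable-diffusion-unfiltered/outputs/txt2img-samples/**/*.png',
                             recursive=True)):
    display(Image(filename=path))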

Related

Writing apache beam pCollection to bigquery causes type Error

I have a simple beam pipeline, as follows:
with beam.Pipeline() as pipeline:
    output = (
        pipeline
        | 'Read CSV' >> beam.io.ReadFromText('raw_files/myfile.csv',
                                             skip_header_lines=True)
        | 'Split strings' >> beam.Map(lambda x: x.split(','))
        | 'Convert records to dictionary' >> beam.Map(to_json)
        | beam.io.WriteToBigQuery(project='gcp_project_id',
                                  dataset='datasetID',
                                  table='tableID',
                                  create_disposition=bigquery.CreateDisposition.CREATE_NEVER,
                                  write_disposition=bigquery.WriteDisposition.WRITE_APPEND)
    )
However, upon running I get a TypeError, stating the following:
  line 2147, in __init__
    self.table_reference = bigquery_tools.parse_table_reference(
  ...
    if isinstance(table, TableReference):
TypeError: isinstance() arg 2 must be a type or tuple of types
I have tried defining a TableReference object and passing it to the WriteToBigQuery class but still facing the same issue. Am I missing something here? I've been stuck at this step for what feels like forever and I don't know what to do. Any help is appreciated!
This probably occurred because you installed Apache Beam without the GCP modules. Please make sure to do the following (in a virtual environment):
pip install apache-beam[gcp]
It's a weird error though, so feel free to file a GitHub issue against the Apache Beam project.
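As a quick, hedged sanity check (not from the original answer), you can try importing the GCP-specific pieces; if the [gcp] extra is missing, these imports typically fail, which is consistent with the isinstance() error above. Also note that some shells (e.g. zsh) need the extra quoted: pip install 'apache-beam[gcp]'.
# Hedged sanity check: these imports only resolve when apache-beam[gcp] is installed.
from apache_beam.io.gcp.bigquery import WriteToBigQuery
from apache_beam.io.gcp.internal.clients.bigquery import TableReference

print("apache-beam[gcp] extras are importable:", WriteToBigQuery, TableReference)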

saleae logic 2 package in notebook

It is my first time with Saleae.
I've installed it on my Windows machine and launched a notebook. My problem
is that I can't create a Saleae object. Here is my code:
import saleae
from saleae import Saleae
s = Saleae()
I’m having this error message:
INFO:saleae.saleae:Could not connect to Logic software, attempting to launch it now
ConnectionRefusedError                    Traceback (most recent call last)
File ....\lib\site-packages\saleae\saleae.py:211, in Saleae.__init__(self, host, port, quiet, args)
    210     self._s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
--> 211     self._s.connect((host, port))
    212 except ConnectionRefusedError:
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
How can I solve this issue?
I found the solution by reverting back to Logic 1.2.40.
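For context, a hedged explanation and sketch: the saleae Python package talks to the scripting socket server of the legacy Logic 1.x software, which Logic 2 does not expose the same way, so reverting to 1.2.40 (with "Enable scripting socket server" turned on in its options) lets the connection succeed. The host and port below are the usual defaults and are assumptions about your setup:
# Hedged sketch: connect to a running Logic 1.x instance with the scripting
# socket server enabled. 10429 is the usual default port; adjust if needed.
from saleae import Saleae

s = Saleae(host='localhost', port=10429)
print(s.get_connected_devices())  # list attached devices to confirm the link works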

Graphlab Create setup error: graphlab.get_dependencies() results in BadZipFile error

After installing GraphLab Create on Windows 10, it asks you to install two dependencies using graphlab.get_dependencies().
However, I am getting the following error:
In [9]: gl.get_dependencies()
By running this function, you agree to the following licenses.
* libstdc++: https://gcc.gnu.org/onlinedocs/libstdc++/manual/license.html
* xz: http://git.tukaani.org/?p=xz.git;a=blob;f=COPYING
Downloading xz.
Extracting xz.
---------------------------------------------------------------------------
BadZipfile Traceback (most recent call last)
in ()
----> 1 gl.get_dependencies()
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\site-packages\graphlab\dependencies.pyc in get_dependencies()
34 xzarchive_dir = tempfile.mkdtemp()
35 print('Extracting xz.')
---> 36 xzarchive = zipfile.ZipFile(xzarchive_file)
37 xzarchive.extractall(xzarchive_dir)
38 xz = os.path.join(xzarchive_dir, 'bin_x86-64', 'xz.exe')
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\zipfile.pyc in __init__(self, file, mode, compression, allowZip64)
768 try:
769 if key == 'r':
--> 770 self._RealGetContents()
771 elif key == 'w':
772 # set the modified flag so central directory gets written
C:\Users\nikulk\Anaconda2\envs\gl-env\lib\zipfile.pyc in _RealGetContents(self)
809 raise BadZipfile("File is not a zip file")
810 if not endrec:
--> 811 raise BadZipfile, "File is not a zip file"
812 if self.debug > 1:
813 print endrec
BadZipfile: File is not a zip file
Does anyone know how to resolve this?
If you get this error, a firewall might be blocking you from downloading a dependency. Here is some information and a workaround:
Please see the SFrame source code for get_dependencies to see how GraphLab uses this package: https://github.com/turicode/SFrame/blob/master/oss_src/unity/python/sframe/dependencies.py
The xz utility is only used to extract runtime dependencies from the other file downloaded there (from repo.msys2.org): http://repo.msys2.org/mingw/x86_64/mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz. Two DLLs from that file need to be extracted into the "cython" directory inside the GraphLab Create install path (typically something like lib/site-packages/python2.7/graphlab within a virtualenv or conda env). Once extracted the dependency issue should be resolved.
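If the firewall keeps blocking get_dependencies(), one option is to fetch and unpack the archive yourself. Below is a rough, hedged sketch (run it with Python 3, whose tarfile reads .xz natively; the DLL names and the target cython directory are assumptions based on the SFrame dependencies.py linked above, so adjust them to your install):
# Hedged sketch: manually download the gcc-libs archive and copy the runtime
# DLLs into GraphLab's cython directory. Paths and DLL names are assumptions.
import os
import tarfile
import urllib.request

URL = ("http://repo.msys2.org/mingw/x86_64/"
       "mingw-w64-x86_64-gcc-libs-5.1.0-1-any.pkg.tar.xz")
ARCHIVE = "gcc-libs.pkg.tar.xz"
TARGET = r"C:\Users\nikulk\Anaconda2\envs\gl-env\lib\site-packages\graphlab\cython"
DLLS = {"libstdc++-6.dll", "libgcc_s_seh-1.dll"}  # assumed DLL names

urllib.request.urlretrieve(URL, ARCHIVE)
with tarfile.open(ARCHIVE, "r:xz") as tar:
    for member in tar.getmembers():
        name = os.path.basename(member.name)
        if name in DLLS:
            member.name = name  # drop the archive's internal path
            tar.extract(member, TARGET)
            print("extracted", name, "to", TARGET)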
Make the graphlab folder writable; initially it is read-only. Go to the folder's Properties and uncheck the read-only option. Hope this solves your problem.

plone.app.blob RuntimeError while migrating ATFile

I'm doing a blob migration on a Plone 3.2.1 site and I'm getting "RuntimeError: maximum recursion depth exceeded while calling a Python object" on some files during @@blob-file-migration.
I found this http://svn.eionet.europa.eu/projects/Zope/ticket/4190 and it looks like they solved this problem for images by creating a custom migrator.
Any clue? Traceback below.
File "/home/simahawk/dev/plone/plone3/projx/src/plone.app.blob/src/plone/app/blob/content.py", line 113, in setFile
mutator = self.getField('file').getMutator(self)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/Products.Archetypes-1.5.10-py2.4.egg/Products/Archetypes/BaseObject.py", line 241, in getField
return self.Schema().get(key)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/Products.Archetypes-1.5.10-py2.4.egg/Products/Archetypes/BaseObject.py", line 828, in Schema
schema = ISchema(self)
File "/home/simahawk/dev/plone/plone3/projx/parts/zope2/lib/python/zope/app/component/hooks.py", line 96, in adapter_hook
return siteinfo.adapter_hook(interface, object, name, default)
File "/home/simahawk/dev/plone/plone3/buildout/eggs/archetypes.schemaextender-2.1.1-py2.4.egg/archetypes/schemaextender/extender.py", line 143, in cachingInstanceSchemaFactory
key = IUUID(context, str(id(context)))
File "/home/simahawk/dev/plone/plone3/projx/parts/zope2/lib/python/zope/app/component/hooks.py", line 96, in adapter_hook
return siteinfo.adapter_hook(interface, object, name, default)
RuntimeError: maximum recursion depth exceeded in cmp
2013-03-06 10:16:49 INFO ATCT.migration Rolling back to last safe point
When migrating from Plone 3.x to Plone 4.x using Products.contentmigration I got the same error. It seemed there was a bug in the plone.app.blob migration. We made this custom migration to bypass the recursion error: http://svn.eionet.europa.eu/projects/Zope/browser/trunk/Products.EEAPloneAdmin/trunk/Products/EEAPloneAdmin/Extensions/ImageFS2Image.py?rev=29656
The problem is the archetypes.schemaextender version (2.1.1). Down-pinning to 1.6.0 solved the issue. This also solved a random KeyError on a 3.3.5 site. I think this is related to #12051 and #11396. It looks like these are common problems with newer versions of archetypes.schemaextender, but the package's README has no info for Plone 3.x.
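For reference, pinning is usually done in the buildout's versions section; a minimal, hedged sketch (the section and file names depend on your buildout layout):
[versions]
archetypes.schemaextender = 1.6.0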

How do I set a backend for django-celery? I set CELERY_RESULT_BACKEND, but it is not recognized

I set CELERY_RESULT_BACKEND = "amqp" in celeryconfig.py
but I get:
>>> from tasks import add
>>> result = add.delay(3,5)
>>> result.ready()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/result.py", line 105, in ready
return self.state in self.backend.READY_STATES
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/result.py", line 184, in state
return self.backend.get_status(self.task_id)
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/backends/base.py", line 414, in _is_disabled
raise NotImplementedError("No result backend configured. "
NotImplementedError: No result backend configured. Please see the documentation for more information.
I just went through this, so I can shed some light on it. One might think that, with all of the great documentation, some of this would have been a bit more obvious.
I'll assume you have RabbitMQ up and functioning (it needs to be running), and that you have django-celery installed.
Once you have that, all you need to do is include this single line in your settings.py file:
BROKER_URL = "amqp://guest:guest@localhost:5672//"
Then you need to run syncdb and start this thing up using:
python manage.py celeryd -E -B --loglevel=info
The -E flag states that you want events captured and -B states that you want celerybeat running. The former enables you to actually see something in the admin window, and the latter allows you to schedule tasks. Finally, you need to ensure that you are actually going to capture the events and the status. So in another terminal run this:
./manage.py celerycam
And then finally you're able to see the working example provided in the docs, again assuming you created the tasks.py that it tells you to.
>>> result = add.delay(4, 4)
>>> result.ready() # returns True if the task has finished processing.
False
>>> result.result # task is not ready, so no return value yet.
None
>>> result.get() # Waits until the task is done and returns the retval.
8
>>> result.result # direct access to result, doesn't re-raise errors.
8
>>> result.successful() # returns True if the task didn't end in failure.
True
Furthermore, you are then able to view your status in the admin panel.
I hope this helps! I would add one more thing which helped me: watching the RabbitMQ log file was key, as it helped me confirm that django-celery was actually talking to RabbitMQ.
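Putting the settings pieces together, here is a hedged sketch of a minimal settings.py for django-celery; the djcelery loader lines and the result backend setting are assumptions beyond the single BROKER_URL line above, so adapt them to your project:
# Hedged sketch of the django-celery settings discussed above.
import djcelery
djcelery.setup_loader()

INSTALLED_APPS += ("djcelery",)  # so syncdb creates the monitoring tables
BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_RESULT_BACKEND = "amqp"   # store task state/results via RabbitMQ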
Are you running django-celery?
If so, you need to start a Python shell in the context of Django (or whatever the technical term is).
Type:
python manage.py shell
And try your commands from that shell
Hi, I tried everything to get Celery v3.1.25 working with Django 1.8 and nothing worked.
Finally, the line below helped me (feeling happy):
app = Celery('documents', backend="celery.backends.amqp:AMQPBackend")
Setting backend="celery.backends.amqp:AMQPBackend" fixed my error.
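For completeness, a hedged sketch of how that line fits into a full Celery app definition; the broker URL and the example task are placeholders, not from the original answer:
# Hedged sketch: Celery app with an explicit AMQP result backend.
from celery import Celery

app = Celery('documents',
             broker='amqp://guest:guest@localhost:5672//',
             backend='celery.backends.amqp:AMQPBackend')

@app.task
def add(x, y):
    return x + y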