When I try to schedule a job after deploying a project, I get the following error:
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 18, in render
    return JsonResource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/txweb.py", line 10, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/resource.py", line 250, in render
    return m(request)
  File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 37, in render_POST
    self.root.scheduler.schedule(project, spider, **args)
  File "/usr/lib/pymodules/python2.7/scrapyd/scheduler.py", line 16, in schedule
    q.add(spider_name, **spider_args)
  File "/usr/lib/pymodules/python2.7/scrapyd/spiderqueue.py", line 18, in add
    self.q.put(d, priority)
  File "/usr/lib/pymodules/python2.7/scrapyd/sqlite.py", line 103, in put
    self.conn.execute(q, args)
OperationalError: attempt to write a readonly database
If I restart scrapyd after deploying the project and then schedule a job, it works fine. But honestly, I do not see the point of restarting scrapyd every time I deploy... it does not make sense.
I have checked the DB folder and there is a crawling_project.db file with root:root ownership. Could this be causing the issue?
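If scrapyd itself runs as an unprivileged user, a root-owned database file would explain the write failure. A possible fix, sketched under the assumption that scrapyd runs as a user named scrapyd and that the dbs_dir from your scrapyd.conf is /var/lib/scrapyd/dbs (substitute your actual user and path):

sudo chown -R scrapyd:scrapyd /var/lib/scrapyd/dbs

After that, the queue database is writable by the scrapyd process, so scheduling should work without a restart.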
I'm trying to create a new module structure using the scaffold command in src/odoo/addons of the production stage via the odoo.sh editor.
~/src/odoo/addons$ odoo-bin scaffold mymodule
But I get this error:
Traceback (most recent call last):
  File "/home/odoo/src/odoo/odoo-bin", line 8, in <module>
    odoo.cli.main()
  File "/home/odoo/src/odoo/odoo/cli/command.py", line 60, in main
    o.run(args)
  File "/home/odoo/src/odoo/odoo/cli/scaffold.py", line 39, in run
    {'name': args.name})
  File "/home/odoo/src/odoo/odoo/cli/scaffold.py", line 121, in render_to
    os.makedirs(destdir)
  File "/usr/lib/python3.6/os.py", line 220, in makedirs
    mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/home/odoo/src/odoo/addons/mymodule'
If I create the module structure in other stages, it works fine.
Do I not have permission to create modules in the production stage, or is the production stage only meant for merging from other stages?
Please help!
Thank you!
Does the /home/odoo/src/odoo/addons/mymodule directory have write permission?
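A quick check from the odoo.sh shell, using the path from your traceback (the .write_test name is just a hypothetical scratch file for the test):

ls -ld /home/odoo/src/odoo/addons
touch /home/odoo/src/odoo/addons/.write_test

If the touch fails with "Read-only file system" rather than "Permission denied", the whole mount is read-only and no permission change will help; that matches the Errno 30 in the traceback.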
I tried to run the command below:
conda create --name tf_gpu tensorflow-gpu
and it threw this error:
Traceback (most recent call last):
  File "C:\Users\alexk\Anaconda3\lib\site-packages\conda\exceptions.py", line 1062, in __call__
    return func(*args, **kwargs)
  File "C:\Users\alexk\Anaconda3\lib\site-packages\conda\cli\main.py", line 84, in _main
    exit_code = do_call(args, p)
  File "C:\Users\alexk\Anaconda3\lib\site-packages\conda\cli\conda_argparse.py", line 82, in do_call
    exit_code = getattr(module, func_name)(args, parser)
  File "C:\Users\alexk\Anaconda3\lib\site-packages\conda\cli\main_create.py", line 37, in execute
    install(args, parser, 'create')
  File "C:\Users\alexk\Anaconda3\lib\site-packages\conda\cli\install.py", line 116, in install
    if context.use_only_tar_bz2:
  File "C:\Users\alexk\Anaconda3\lib\site-packages\conda\base\context.py", line 664, in use_only_tar_bz2
    import conda_package_handling.api
  File "C:\Users\alexk\Anaconda3\lib\site-packages\conda_package_handling\api.py", line 12, in <module>
    from .tarball import CondaTarBZ2 as _CondaTarBZ2, libarchive_enabled
  File "C:\Users\alexk\Anaconda3\lib\site-packages\conda_package_handling\tarball.py", line 11, in <module>
    import libarchive
  File "C:\Users\alexk\Anaconda3\lib\site-packages\libarchive\__init__.py", line 1, in <module>
    from .entry import ArchiveEntry
  File "C:\Users\alexk\Anaconda3\lib\site-packages\libarchive\entry.py", line 6, in <module>
    from . import ffi
  File "C:\Users\alexk\Anaconda3\lib\site-packages\libarchive\ffi.py", line 27, in <module>
    libarchive = ctypes.cdll.LoadLibrary(libarchive_path)
  File "C:\Users\alexk\Anaconda3\lib\ctypes\__init__.py", line 434, in LoadLibrary
    return self._dlltype(name)
  File "C:\Users\alexk\Anaconda3\lib\ctypes\__init__.py", line 356, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 127] The specified procedure could not be found
and an error window also pops up saying:
The procedure entry point gzdirect could not be located in the dynamic link library C:\User\\Anaconda3\Library\bin\libxml2.dll
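One way to see which copy of a DLL Windows is actually picking up is to list every match on PATH, in search order. A diagnostic sketch for cmd.exe (the DLL names come from the error messages above):

where libxml2.dll
where zlib.dll

If the first hit is not under your Anaconda3\Library\bin directory, some other program's directory on PATH is shadowing Anaconda's copy.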
I am answering quite late, but I had a similar issue recently and this post was the closest to my issue (and had no answers).
In my case, though, I was trying to communicate with MATLAB from C++ code (using Visual C++ in Visual Studio).
After searching for a long time, I discovered that gzdirect comes from zlib1.dll, not libxml.
What caused the issue for me was that I had added the path to the MATLAB DLLs at the end of my PATH environment variable, so a zlib from elsewhere on my system was picked up instead of MATLAB's zlib.
All I had to do was put the path to MATLAB's DLLs at the start of my PATH instead of at the end.
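For anyone hitting the same thing, the fix is just PATH ordering. A sketch for a cmd.exe session, assuming a default MATLAB R2019b install (adjust the version and path to yours):

set PATH=C:\Program Files\MATLAB\R2019b\bin\win64;%PATH%

Prepending makes Windows resolve gzdirect from MATLAB's own zlib1.dll before any other copy further down the PATH.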
I tried to deploy a test Scrapy project to a remote Scrapyd server, and I got the following error message on the client side:
curl http://IP:6800/addversion.json -d project=test_project -d spider=quotes
{"status": "error", "message": "'version'", "node_name": "serverName"}
Error message on the server side:
2018-11-13T12:22:22+0000 [_GenericHTTPChannelProtocol,0,IP Address] Unhandled Error
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/twisted/web/http.py", line 2190, in allContentReceived
    req.requestReceived(command, path, version)
  File "/usr/lib64/python2.7/site-packages/twisted/web/http.py", line 917, in requestReceived
    self.process()
  File "/usr/lib64/python2.7/site-packages/twisted/web/server.py", line 199, in process
    self.render(resrc)
  File "/usr/lib64/python2.7/site-packages/twisted/web/server.py", line 259, in render
    body = resrc.render(self)
--- <exception caught here> ---
  File "/usr/lib/python2.7/site-packages/scrapyd/webservice.py", line 21, in render
    return JsonResource.render(self, txrequest).encode('utf-8')
  File "/usr/lib/python2.7/site-packages/scrapyd/utils.py", line 20, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/lib64/python2.7/site-packages/twisted/web/resource.py", line 250, in render
    return m(request)
  File "/usr/lib/python2.7/site-packages/scrapyd/webservice.py", line 83, in render_POST
    version = txrequest.args[b'version'][0].decode('utf-8')
exceptions.KeyError: 'version'
I checked both the client and the server: the Scrapy version is 1.5.1 on both, and the Python version is 2.7.*.
The sample curl command you showed is not supposed to work. According to the documentation, you also need:
A version argument, whose absence is the direct cause of the KeyError you are seeing now.
An egg argument containing the actual project code; otherwise scrapyd has nothing to deploy when you pass in only the project and spider names. (See the example below.)
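A minimal working call, following the addversion.json example in the scrapyd docs (the version string r1 and the egg filename are placeholders for your build):

curl http://IP:6800/addversion.json -F project=test_project -F version=r1 -F egg=@test_project.egg

Note there is no spider argument at all: scrapyd discovers the spiders from the uploaded egg. Alternatively, the scrapyd-deploy tool builds the egg and makes this request for you.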
I was trying to deploy my Scrapy code to AWS using scrapyd, but I ran into an issue I could not figure out. It has been two days. I saw similar problems on the web, but did not find any helpful solution.
2016-02-15 08:41:20+0000 [HTTPChannel,1,xx.xxx.x.xxx] Unhandled Error
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/web/http.py", line 1730, in allContentReceived
    req.requestReceived(command, path, version)
  File "/usr/lib/python2.7/dist-packages/twisted/web/http.py", line 826, in requestReceived
    self.process()
  File "/usr/lib/python2.7/dist-packages/twisted/web/server.py", line 189, in process
    self.render(resrc)
  File "/usr/lib/python2.7/dist-packages/twisted/web/server.py", line 238, in render
    body = resrc.render(self)
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 17, in render
    return JsonResource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/utils.py", line 19, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/lib/python2.7/dist-packages/twisted/web/resource.py", line 249, in render
    raise UnsupportedMethod(allowedMethods)
twisted.web.error.UnsupportedMethod: ['HEAD', 'object', 'POST']
I have tried running the Scrapy code on its own, both on my MacBook and on the EC2 server, and it works in both cases. It just does not work when I use my MacBook to schedule a job on EC2.
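For reference, the scrapyd API endpoints accept only POST, so a plain GET (for example, opening the URL in a browser) raises exactly this UnsupportedMethod error. A schedule request is supposed to look like this, per the docs (the host, project, and spider names are placeholders):

curl http://your-ec2-host:6800/schedule.json -d project=myproject -d spider=myspider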
These are the steps I followed to set things up.
I tried to migrate Plone 3.3.6 to the newer Plone 4.0.7 (and then to 4.3.x), but I ran into multiple errors:
Full traceback
2013-10-07 13:51:33 INFO ProgressHandler Process started (1842 objects to go)
2013-10-07 13:51:33 ERROR plone.app.upgrade Upgrade aborted. Error:
Traceback (most recent call last):
  File "/Users/iie/Projects/plone4.0/rwa/eggs/Plone-4.0.7-py2.6.egg/Products/CMFPlone/MigrationTool.py", line 175, in upgrade
    step['step'].doStep(setup)
  File "/Users/iie/Projects/plone4.0/rwa/eggs/Products.GenericSetup-1.6.3-py2.6.egg/Products/GenericSetup/upgrade.py", line 142, in doStep
    self.handler(tool)
  File "/Users/iie/Projects/plone4.0/rwa/eggs/plone.app.upgrade-1.0.7-py2.6.egg/plone/app/upgrade/v40/betas.py", line 117, in updateIconMetadata
    obj = brain.getObject()
  File "/Users/iie/Projects/plone4.0/rwa/eggs/Zope2-2.12.18-py2.6-macosx-10.7-x86_64.egg/Products/ZCatalog/CatalogBrains.py", line 92, in getObject
    target = parent.restrictedTraverse(path[-1])
  File "/Users/iie/Projects/plone4.0/rwa/eggs/Zope2-2.12.18-py2.6-macosx-10.7-x86_64.egg/OFS/Traversable.py", line 310, in restrictedTraverse
    return self.unrestrictedTraverse(path, default, restricted=True)
  File "/Users/iie/Projects/plone4.0/rwa/eggs/Zope2-2.12.18-py2.6-macosx-10.7-x86_64.egg/OFS/Traversable.py", line 278, in unrestrictedTraverse
    raise e
AttributeError: pa_20120810.pdf
If I delete "pa_20120810.pdf", another file throws the same error, and so on...
I hope you understand me and that someone can help.
Thanks
Something to try: before migrating, use collective.catalogcleanup to remove broken references from your catalog. It's easy to use: add it to your buildout, restart the site, and go to /@@collective-catalogcleanup?dry_run=false in your browser.
As collective.catalogcleanup's documentation states:
The goal is to get rid of outdated brains that could otherwise cause problems, for example during an upgrade to Plone 4.
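A minimal buildout sketch of the "add to your buildout" step, assuming your Zope instance part is named instance (adjust to your setup):

[instance]
eggs +=
    collective.catalogcleanup

Rerun buildout and restart before visiting the cleanup URL; per the documentation, the view only reports what it would remove unless you pass dry_run=false.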