Permission denied creating group ckan 2.9.5 - permissions

I'm having an issue with my CKAN 2.9.5 instance whenever I do anything (create groups or organizations, upload files, ...). Every time I try, I get Permission denied: '/var/lib/ckan/storage/uploads/group', even though I am a sysadmin user.
I tried giving full permissions to /var/lib/ckan/storage, but nothing changes.
These are the permissions of the folder
And this is the error log:
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/app.py", line 2449, in wsgi_app
response = self.handle_exception(e)
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/app.py", line 1866, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask_debugtoolbar/__init__.py", line 125, in dispatch_request
return view_func(**req.view_args)
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/views.py", line 89, in view
return self.dispatch_request(*args, **kwargs)
File "/usr/lib/ckan/venv/lib/python3.8/site-packages/flask/views.py", line 163, in dispatch_request
return meth(*args, **kwargs)
File "/usr/lib/ckan/venv/src/ckan/ckan/views/group.py", line 859, in post
group = _action(u'group_create')(context, data_dict)
File "/usr/lib/ckan/venv/src/ckan/ckan/logic/__init__.py", line 504, in wrapped
result = _action(context, data_dict, **kw)
File "/usr/lib/ckan/venv/src/ckan/ckan/logic/action/create.py", line 871, in group_create
return _group_or_org_create(context, data_dict)
File "/usr/lib/ckan/venv/src/ckan/ckan/logic/action/create.py", line 701, in _group_or_org_create
upload = uploader.get_uploader('group')
File "/usr/lib/ckan/venv/src/ckan/ckan/lib/uploader.py", line 60, in get_uploader
upload = Upload(upload_to, old_filename)
File "/usr/lib/ckan/venv/src/ckan/ckan/lib/uploader.py", line 126, in __init__
os.makedirs(self.storage_path)
File "/usr/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/var/lib/ckan/storage/uploads/group'
Thanks for any help.

If you're running this as the 'ckan' user, I think you're getting this error because the storage folder is probably owned by 'root'. You should give ownership of the folder to user 'ckan'.
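On a non-Docker install, a minimal sketch of that fix might look like this (assuming the CKAN web process really runs as the 'ckan' user; on package installs it may run as 'www-data' instead, so substitute that user if needed):
$ sudo chown -R ckan:ckan /var/lib/ckan/storage
$ sudo chmod -R u+rwX /var/lib/ckan/storage
$ ls -l /var/lib/ckan    # verify the owner changed
Restart the CKAN service afterwards and retry creating the group.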

I had the same problem with a Docker instance of ckan.
Solution (do this in the ckan container as root user):
$ cd /var/lib/ckan
$ ls -l
total 8
drwxr-xr-x 3 root root 4096 Sep 7 17:17 storage
drwxr-xr-x 5 ckan ckan 4096 Sep 7 17:18 webassets
$ chown -R ckan.ckan storage
$ ls -l
total 8
drwxr-xr-x 3 ckan ckan 4096 Sep 7 17:17 storage
drwxr-xr-x 5 ckan ckan 4096 Sep 7 17:18 webassets
Now CKAN works smoothly.

Related

Errno 24 Too many open files when using PsychoPy

I have 40 videos that I would like to present in a random loop 15 times. Even though the experiment runs all the way through, my .csv output file does not save, and I get an error saying too many files are open.
If I change the number of repetitions to 5, it works and saves all the data.
Anything above 5 gives the error [Errno 24] Too many open files and saves only the .log file.
Is there a piece of code I can add to my .py to close each file after it's shown? Or is it an operating system problem? I use Windows 7. Any ideas are greatly appreciated.
Below is the entire output message.
################## Running: F:\Movies\Block 1_lastrun.py ##################
Traceback (most recent call last):
File "U:\Final draft\Block 1_lastrun.py", line 189, in <module>
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy\visual\movie3.py", line 134, in __init__
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy\visual\movie3.py", line 180, in loadMovie
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 81, in __init__
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\moviepy\video\io\ffmpeg_reader.py", line 32, in __init__
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\moviepy\video\io\ffmpeg_reader.py", line 256, in ffmpeg_parse_infos
File "C:\Program Files (x86)\PsychoPy2\lib\subprocess.py", line 745, in __init__
OSError: [Errno 24] Too many open files
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "C:\Program Files (x86)\PsychoPy2\lib\atexit.py", line 24, in _run_exitfuncs
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy\data\experiment.py", line 366, in close
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy\data\experiment.py", line 351, in saveAsPickle
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy\tools\filetools.py", line 149, in openOutputFile
File "C:\Program Files (x86)\PsychoPy2\lib\codecs.py", line 896, in open
IOError: [Errno 24] Too many open files: u'U:\Final draft\data/_BM-Stimulation_2018_Jun_13_1535.psydat'
Error in sys.exitfunc:
Traceback (most recent call last):
File "C:\Program Files (x86)\PsychoPy2\lib\atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy\data\experiment.py", line 366, in close
self.saveAsPickle(self.dataFileName)
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy\data\experiment.py", line 351, in saveAsPickle
fileCollisionMethod=fileCollisionMethod) as f:
File "C:\Program Files (x86)\PsychoPy2\lib\site-packages\psychopy\tools\filetools.py", line 149, in openOutputFile
f = codecs.open(fileName, mode=mode, encoding=encoding)
File "C:\Program Files (x86)\PsychoPy2\lib\codecs.py", line 896, in open
file = __builtin__.open(filename, mode, buffering)
IOError: [Errno 24] Too many open files: u'F:\Movies\data/_BM-Stimulation_2018_Jun_26_1533.psydat'

LetsEncrypt Certbot-Auto freezes when trying to run any command on Apache

I am trying to get LetsEncrypt SSL certificates installed on a CentOS 6 server using Certbot-Auto, but no matter what I try, it just hangs.
Apache version is 2.2.15.
Command:
./certbot-auto -v
When I press CTRL + C to exit the program, it takes about 15 seconds and then exits with a stack trace:
Exiting abnormally:
Traceback (most recent call last):
File "/opt/eff.org/certbot/venv/bin/letsencrypt", line 9, in <module>
load_entry_point('letsencrypt==0.7.0', 'console_scripts', 'letsencrypt')()
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/main.py", line 1240, in main
return config.func(config, plugins)
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/main.py", line 981, in run
installer, authenticator = plug_sel.choose_configurator_plugins(config, plugins, "run")
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/selection.py", line 189, in choose_configurator_plugins
authenticator = installer = pick_configurator(config, req_inst, plugins)
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/selection.py", line 25, in pick_configurator
(interfaces.IAuthenticator, interfaces.IInstaller))
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/selection.py", line 77, in pick_plugin
verified.prepare()
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/disco.py", line 248, in prepare
return [plugin_ep.prepare() for plugin_ep in six.itervalues(self._plugins)]
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/disco.py", line 248, in <listcomp>
return [plugin_ep.prepare() for plugin_ep in six.itervalues(self._plugins)]
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/disco.py", line 130, in prepare
self._initialized.prepare()
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/configurator.py", line 225, in prepare
self.parser = self.get_parser()
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/override_centos.py", line 39, in get_parser
self.version, configurator=self)
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/override_centos.py", line 47, in __init__
super(CentOSParser, self).__init__(*args, **kwargs)
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/parser.py", line 74, in __init__
if self.find_dir("Define", exclude=False):
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/parser.py", line 401, in find_dir
"%s//*[self::directive=~regexp('%s')]" % (start, regex))
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/augeas.py", line 413, in match
ctypes.byref(array))
KeyboardInterrupt
Please see the logfiles in /var/log/letsencrypt for more details.
I thought it might be a Python version issue, but when I checked, the server is running Python 2.6.6, which, according to the Certbot System Requirements, is acceptable.
Letsencrypt.log
When I checked the log, it contains exactly the same stack trace as was reported by the script above.
Any ideas?
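Not a full fix, but one hedged workaround while the Apache plugin hangs in its config parser is to request the certificate with the webroot authenticator, so certbot-auto never has to parse the Apache config (the webroot path and domain below are placeholders):
$ ./certbot-auto certonly --webroot -w /var/www/html -d example.com -v
This skips the installer step, so the certificate then has to be added to the Apache vhost by hand.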

Workflow Disconnection

When running a workflow from the repo like:
snakemake --use-conda -p --cluster-config cluster.yaml --cluster "qsub -l {cluster.l} -m {cluster.m} -N {cluster.N} -r {cluster.r} -V" --jobs 1
My job starts but runs into a urllib-related error (see below). I'm running v3.13.3 on a compute server. Any tips on how to avoid this? Thanks in advance.
File "miniconda3/lib/python3.5/site-packages/snakemake/__init__.py", line 469, in snakemake
force_use_threads=use_threads)
File "miniconda3/lib/python3.5/site-packages/snakemake/workflow.py", line 450, in execute
dag.create_conda_envs(dryrun=dryrun)
File "miniconda3/lib/python3.5/site-packages/snakemake/dag.py", line 166, in create_conda_envs
hash = env.hash
File "miniconda3/lib/python3.5/site-packages/snakemake/conda.py", line 55, in hash
md5hash.update(self.content)
File "miniconda3/lib/python3.5/site-packages/snakemake/conda.py", line 38, in content
content = urlopen(env_file).read()
File "miniconda3/lib/python3.5/urllib/request.py", line 163, in urlopen
return opener.open(url, data, timeout)
File "miniconda3/lib/python3.5/urllib/request.py", line 466, in open
response = self._open(req, data) ...
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 101] Network is unreachable>
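The URLError comes from the main snakemake process while it builds the conda environments (dag.create_conda_envs in the traceback), not from the qsub jobs, so the host you launch snakemake from needs outbound network access to fetch the environment file and packages. A hedged sketch of checking that before submitting (the URL is only an example):
$ curl -sI https://repo.anaconda.com >/dev/null && echo "network OK" || echo "no outbound network"
$ snakemake --use-conda -p --cluster-config cluster.yaml --cluster "qsub -l {cluster.l} -m {cluster.m} -N {cluster.N} -r {cluster.r} -V" --jobs 1
If the submit host is offline by design, launching snakemake from a node that does have internet access (so the conda environments get created there first) should avoid the error.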

mrjob fail to mkdir hadoop directory

This is my first time using mrjob, and I run into the following problem when executing the relevant Python script:
No configs found; falling back on auto-configuration
Looking for hadoop binary in /home/work/alex/tools/hadoop-client-1.5.5/hadoop/bin...
Found hadoop binary: /home/work/alex/tools/hadoop-client-1.5.5/hadoop/bin/hadoop
Creating temp directory /tmp/simrank_mr.work.20161204.050846.350418
Using Hadoop version 2
STDERR: 16/12/04 13:08:48 INFO common.UpdateService: ZkstatusUpdater to hn01-lp-hdfs.dmop.ac.com:54310 started
STDERR: mkdir: cannot create directory -p: File exists
STDERR: java.io.IOException: cannot create directory -p: File exists
STDERR: at org.apache.hadoop.fs.FsShell.mkdir(FsShell.java:1020)
STDERR: at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1934)
STDERR: at org.apache.hadoop.fs.FsShell.run(FsShell.java:2259)
STDERR: at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
STDERR: at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
STDERR: at org.apache.hadoop.fs.FsShell.main(FsShell.java:2331)
Traceback (most recent call last):
File "simrank_mr.py", line 121, in <module>
MRSimRank.run()
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/job.py", line 429, in run
mr_job.execute()
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/job.py", line 447, in execute
super(MRJob, self).execute()
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/launch.py", line 158, in execute
self.run_job()
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/launch.py", line 228, in run_job
runner.run()
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/runner.py", line 481, in run
self._run()
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/hadoop.py", line 335, in _run
self._upload_local_files_to_hdfs()
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/hadoop.py", line 362, in _upload_local_files_to_hdfs
self.fs.mkdir(self._upload_mgr.prefix)
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/fs/composite.py", line 76, in mkdir
return self._do_action('mkdir', path)
File "/home/work/.jumbo/lib/python2.7/site-packages/mrjob-0.5.6-py2.7.egg/mrjob/fs/composite.py", line 63, in _do_action
raise first_exception
IOError: Could not mkdir hdfs:///user/work/alex/tmp/cluster/mrjob/tmp/tmp/simrank_mr.work.20161204.050846.350418/files/
Does anyone know how to solve this problem? Many thanks!
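The FsShell output ("mkdir: cannot create directory -p: File exists") suggests the hadoop-client-1.5.5 binary is an old FsShell that treats -p as a path rather than a flag, while mrjob detected Hadoop 2 and passed mkdir -p. A hedged way to confirm that with the same binary (the test path is arbitrary):
$ /home/work/alex/tools/hadoop-client-1.5.5/hadoop/bin/hadoop fs -mkdir -p /tmp/mrjob_mkdir_test
If that reproduces the error, pointing mrjob at a genuine Hadoop 2 client (for example via its hadoop_bin option, or by putting a newer hadoop first on PATH) should let the mkdir -p call succeed.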

Using redis with bokeh-server. Permission denied: '/bokehpids.json'

I'm trying to run bokeh-server with supervisor with redis as a backend and I get this error message on startup:
Traceback (most recent call last):
File "/usr/share/nginx/test-status/flask/bin/bokeh-server", line 7, in <module>
bokeh.server.run()
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/__init__.py", line 175, in run
start_server(args)
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/__init__.py", line 179, in start_server
start.start_simple_server(args)
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/start.py", line 54, in start_simple_server
start_redis()
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/start.py", line 40, in start_redis
save=redis_save)
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/services.py", line 81, in start_redis
stdin=subprocess.PIPE
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/services.py", line 32, in __init__
self.add_to_pidfile()
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/services.py", line 46, in add_to_pidfile
with open(self.pidfilename, "w+") as f:
IOError: [Errno 13] Permission denied: '/bokehpids.json'
Note that I can run the server under supervisor if I use memory as the backend, and I can run bokeh-server manually with redis as the backend just fine. Does anyone know which permissions I should change?
Turns out it was trying to access the pidfile in the root directory...
I solved this by changing the directory in the supervisor config file:
[program:bokeh]
...
directory=/usr/share/nginx/test-status
...
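For reference, the pidfile path appears to be derived from the server's working directory, which defaults to / under supervisor; the directory= line above simply moves it somewhere writable. A rough equivalent when starting the server by hand (paths taken from the traceback) is:
$ cd /usr/share/nginx/test-status
$ ./flask/bin/bokeh-server    # plus whatever backend options you normally pass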