Unexpected error Errno::EACCES error=Permission denied on Amazon EMR

When I try to collect YARN logs using td-agent, I get this exception in the td-agent log.
The td-agent is installed as root.
unexpected error error_class=Errno::EACCES error="Permission denied @ rb_file_s_stat - /var/log/hadoop-yarn/containers/application_1540322839807_0001/container_1540322839807_0001_01_000001/stderr"

I had this issue with td-agent as well.
You can quickly fix it in a few ways:
Add the td-agent user to the group that has access to that log file (see the commands after this list).
Run td-agent as the root user; you need to modify this in the service file.
Wait for the tool developers to fix it.
If anyone has a simpler workaround, they are also welcome to post the solution here.
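For the first two options, something along these lines should work. This is a sketch assuming a systemd-based install and that the YARN container logs are readable by a hadoop group (check the actual owner and group with ls -l); adjust the group name to whatever owns the files on your cluster:
# Option 1: give the td-agent user group access to the logs
sudo usermod -aG hadoop td-agent
sudo systemctl restart td-agent
# Option 2: run td-agent as root via a systemd override
sudo systemctl edit td-agent
# In the override file, set:
#   [Service]
#   User=root
#   Group=root
sudo systemctl daemon-reload
sudo systemctl restart td-agent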


Nextflow: permission denied for files in bin with -rwxrwxr-x permissions granted

I've done a fresh install of Nextflow on a new computer, and I was trying to test the nf-core/rnaseq pipeline, but I receive the following error when executing:
Error executing process > 'NFCORE_RNASEQ:RNASEQ:INPUT_CHECK:SAMPLESHEET_CHECK (samplesheet.csv)'
Command error:
.command.sh: line 3: /media/Data/nextflow-rnaseq/rnaseq/bin/check_samplesheet.py: Permission denied
I've checked the permissions of the file, and it has all the execute permissions: -rwxrwxr-x. I've also tried executing it using both my working environment and Singularity, and I keep hitting the same error.
I've also tested my own pipeline, with another project root folder and its own bin folder with custom scripts in it, and I get the same error.
Does anyone know if I'm missing something I should have done to make the scripts in bin accessible to Nextflow?
Nextflow version: 22.04.4.5706
As Steve pointed out in a comment, the issue was related to how the filesystem was mounted (noexec), and fixing that solved the problem.
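If you want to verify this on your own machine, a minimal check looks like this (the mount point /media/Data is taken from the error message above and may differ for you):
# Show the mount options for the filesystem holding the pipeline
findmnt -T /media/Data/nextflow-rnaseq
# If the OPTIONS column contains "noexec", remount with exec enabled
sudo mount -o remount,exec /media/Data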

Running Apache ActiveMQ Artemis, unable to log in to web console due to IOException

(Windows, JDK 8, and ARTEMIS_HOME set.) I downloaded v2.5.0, created a broker, and ran it:
artemis.cmd create broker1 (specifying login info), then cd broker1 and bin\artemis.cmd run
(I understand the instance is suggested not to be under the ARTEMIS_HOME dir.) The web console renders and I can access it via localhost:8161/console. But when I try to log in, I get a Server Error on the web page, and the CLI shows
[org.eclipse.jetty.server.HttpChannel] /console/auth/login/:java.lang.SecurityException: java.io.IOException: \login.config (No such file or directory)
The file broker1/etc/login.config does exist. I have tried running from various directories and explicitly stating the configuration:
cd broker1/bin, then artemis.cmd run -- xml:artemis-service.xml
But same issue. Why can't this login.config be recognized?
I believe there's a bug in artemis.profile.cmd. It's using this:
-Djava.security.auth.login.config=%ARTEMIS_ETC_INSTANCE%\login.config
But the %ARTEMIS_ETC_INSTANCE% variable is not defined. I believe it should be using %ARTEMIS_INSTANCE_ETC_URI% instead. Can you try this? If that fixes the issue then I'll open a JIRA and send a PR to get it fixed permanently.
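Concretely, that means editing artemis.profile.cmd in the broker instance's etc directory (broker1\etc here). The before/after below simply applies the substitution suggested above; the surrounding text may differ slightly between versions:
Before (references an undefined variable):
-Djava.security.auth.login.config=%ARTEMIS_ETC_INSTANCE%\login.config
After:
-Djava.security.auth.login.config=%ARTEMIS_INSTANCE_ETC_URI%\login.config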

Failed to create database 'metastore_db', see the next exception for details

I'm getting the following exception while trying to start Hive on Ubuntu 14.04 LTS: Caused by: java.sql.SQLException: Failed to create database 'metastore_db', see the next exception for details. The Hadoop installation is correct and working fine. Can anyone tell me what the problem is?
It is because you're not in the same folder where you created your metastore. I was facing the same problem because I was in my main user's folder. When I changed the folder from my main user's to hduser's, my Hive started working.
See the mistake: I tried to find the XML file but it was not there, so I searched and found where it actually was.
Similar to #dk14, in my case I was in a folder on which I had no write permission as my user; I moved to another directory and it worked fine.
The reason for the above error is that the user you are logged in as doesn't have permission to write to that particular directory, meaning the directory in which you are running the schematool command.
For example, my Apache Hive setup was in /opt/apache-hive-3.1.2-bin, so I ran the command:
sudo chown -R hadoopusr /opt/apache-hive-3.1.2-bin/
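To confirm the ownership change took effect, a quick check along these lines helps (hadoopusr is the example user from the command above; Derby creates metastore_db in the current working directory, so that directory must be writable by the user running hive or schematool):
ls -ld /opt/apache-hive-3.1.2-bin
sudo -u hadoopusr touch /opt/apache-hive-3.1.2-bin/.write_test && sudo rm /opt/apache-hive-3.1.2-bin/.write_test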
It is happening because you are in a different folder from the one where your Hive is installed.
So first change directory to the folder where Hive is installed and then try to run hive once again.
Hive should then work properly.
Best of luck.
After spending some (a lot of) time on this, I found that the issue was with the metastore_db directory: it already existed inside the DERBY_HOME/bin path and I didn't have admin access to it. You can either:
delete that folder using admin rights, or
open hive-site.xml inside the HIVE_HOME/conf path in an editor, check the connection string there, and change the database name to something else; that worked for me.
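For the second option, the relevant property in hive-site.xml looks like the following. The databaseName value here is only an illustration (the default is metastore_db, created relative to the current directory); point it at any location your user can write to:
<!-- hive-site.xml: Derby connection string that controls where metastore_db is created -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/hduser/hive_metastore_db;create=true</value>
</property>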

Cannot set up phpMyAdmin in Apache

I just started using Apache, but when I try to run phpMyAdmin, I get this error message:
1 - Can't create/write to file '/var/folders/w1/5yx2p9mj7w9bm67gdwhqxwsr0000gn/T/#sql1ba_3_0.MYI' (Errcode: 13)
Another post on Stack Overflow suggested changing the permissions on the XAMPP file my.cnf with this command:
sudo chmod 600 my.cnf
I tried running the command in the Mac Terminal, but the result was "No such file or directory."
Does anyone know what I should try next?
This is a permission problem on your datadir (the directory where MySQL wants to write files). Normally, at MySQL installation, correct permissions are set for the user who runs mysqld.
Are you sure that MySQL was installed correctly as part of the XAMPP installation?
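A way to narrow it down is to ask MySQL where it writes and check who owns those locations. Errcode 13 is EACCES (permission denied); the paths below are typical XAMPP defaults and may differ on your machine:
# Ask the running server for its data and temp directories
mysql -u root -e "SHOW VARIABLES WHERE Variable_name IN ('datadir','tmpdir');"
# Check ownership of the reported directories, for example:
ls -ld /Applications/XAMPP/xamppfiles/var/mysql
# If they are not owned by the user mysqld runs as, fix that, for example:
sudo chown -R mysql /Applications/XAMPP/xamppfiles/var/mysql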

Has anyone come across this PHP error before, Warning: imagejpeg()?

Warning: imagejpeg() [function.imagejpeg]: Unable to open '/home/SITENAME/public_html/files/cache/052f225905c1618003df0c5088aec7a9.jpg' for writing: Permission denied in /home/SITENAME/public_html/concrete/helpers/image.php on line 172
I emptied the cache directory and still had no luck, and if I change the permissions on the cache folder then I get another error and can't use the site at all:
Warning: require_once(Zend/Cache/Backend/File.php) [function.require-once]: failed to open stream: No such file or directory in /home/MYACCOUNT/public_html/concrete/libraries/3rdparty/Zend/Cache.php on line 133
Fatal error: require_once() [function.require]: Failed opening required 'Zend/Cache/Backend/File.php' (include_path='.:/usr/lib/php:/usr/local/lib/php:/home/owen/php') in /home/MYACCOUNT/public_html/concrete/libraries/3rdparty/Zend/Cache.php on line 133
I don't get it; I've never had this problem before.
Sounds like a permissions problem to me, but we can't tell from this end.
If you can FTP (or cd) into /home/SITENAME/public_html/files/,
check whether 'files' is owned by, and has the same permissions as, public_html.
Then see what permissions they need to have for your hosting setup.
Check that the directory exists.
Check whether the web server daemon (most of the time, www-data) has write permissions to that particular directory.
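A minimal way to run those checks from a shell, assuming the Apache user is www-data (it may be apache or nobody depending on the host):
# Which user is the web server running as?
ps aux | grep -E '[a]pache2|[h]ttpd'
# Does the cache directory exist, and who owns it?
ls -ld /home/SITENAME/public_html/files/cache
# Give the web server user write access to the cache directory only:
sudo chown -R www-data /home/SITENAME/public_html/files/cache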
For future reference, the problem was the PHP handler. It has been changed to CGI mode (as opposed to DSO) and suEXEC turned 'off'; this might be useful for someone down the line.